PROJECT TITLE:
Relative CNN-RNN: Learning Relative Atmospheric Visibility From Images
This paper proposes a deep learning approach for directly estimating relative atmospheric visibility from outdoor photos, without relying on expensive weather imagery or sensor data. The data-driven approach draws on a large collection of Internet photos to learn a wide range of scene and visibility variations. It combines the ranking power of the relative support vector machine, which provides a good ranking representation, with the deep features learned by a novel CNN-RNN model, yielding a relative CNN-RNN coarse-to-fine model (CNN: convolutional neural network; RNN: recurrent neural network). The coarse-to-fine RNN is connected to the CNN through shortcut connections. The CNN captures the global view of the scene, while the RNN mimics how humans shift attention from the whole image to the most distant perceivable region (local). The relative model can also be used to predict absolute visibility in certain situations. Extensive experiments and comparisons validate the approach. An annotated collection of roughly 40,000 photos with about 0.2 million human annotations has been constructed, and this large-scale annotated visibility dataset will be released with the paper.
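To make the ranking idea concrete, here is a minimal sketch (not the authors' code) of the pairwise hinge loss behind a relative/ranking SVM: for an image pair labeled "the first image has better visibility than the second," the model's score for the first should exceed the score for the second by a margin, and any violation is penalized. The linear scoring function and toy 2-D features below are illustrative stand-ins for the learned CNN-RNN features described above.

```python
def ranking_hinge_loss(weights, pairs, margin=1.0):
    """Average pairwise hinge loss over ordered pairs (x_more, x_less),
    where x_more is the feature vector of the image judged more visible."""
    def score(w, x):
        # Linear score: stand-in for a learned visibility scoring function.
        return sum(wi * xi for wi, xi in zip(w, x))

    total = 0.0
    for x_more, x_less in pairs:
        # Penalize when the score gap falls below the margin.
        total += max(0.0, margin - (score(weights, x_more) - score(weights, x_less)))
    return total / len(pairs)

# Toy features (hypothetical): the first pair is ordered correctly with
# enough margin, the second violates the ordering.
pairs = [([2.0, 0.0], [0.5, 0.0]),   # gap 1.5 >= margin -> no penalty
         ([0.5, 0.0], [1.0, 0.0])]   # gap -0.5 -> penalty 1.5
w = [1.0, 0.0]
print(ranking_hinge_loss(w, pairs))  # -> 0.75
```

In the paper's setting, such pairwise comparisons come from the human annotations over image pairs, and the scoring function is the deep CNN-RNN model rather than a linear one.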
Did you like this research project?
To get the guidelines, training, and code for this research project... Click Here