Abstract
In airborne video surveillance, moving-object detection and target tracking are key steps. Under bad weather conditions, however, the presence of clouds and haze, or even smoke rising from buildings, can make processing these videos very challenging. Current cloud detection and classification methods consider only a single image. Moreover, the images they use are often captured by satellites or aircraft at high altitudes, where the long range to the clouds helps distinguish cloudy regions from non-cloudy ones. In this paper, a new approach to cloud and haze detection is proposed that exploits both spatial and temporal information in airborne videos. In this method, several consecutive frames are divided into patches, and co-located patches from consecutive frames are collected into patch sets and fed into a deep convolutional neural network. The network is trained to learn the appearance of clouds as well as their motion characteristics. Therefore, instead of relying on single-frame patches, the decision on a patch in the current frame is based on patches from preceding and subsequent frames. This approach avoids discarding the temporal information about clouds in videos, which may contain important cues for discriminating between cloudy and non-cloudy regions. Experimental results show that using temporal information alongside the spatial characteristics of haze and clouds greatly increases detection accuracy.
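To make the described pipeline concrete, the sketch below shows one way the spatio-temporal patch-set construction could be implemented. The frame size, patch size, and temporal window length are hypothetical choices for illustration, not parameters reported in the paper.

```python
# A minimal sketch of the spatio-temporal patch-set construction the
# abstract describes. Patch size (32) and temporal window (5) are
# assumed values, chosen only for illustration.
import numpy as np

def build_patch_sets(frames, patch=32, window=5):
    """Stack co-located patches from `window` consecutive frames.

    frames : (T, H, W) grayscale video clip as a NumPy array.
    Returns an array of shape (N, window, patch, patch), one
    spatio-temporal patch set per spatial location and time step.
    """
    T, H, W = frames.shape
    sets = []
    for t in range(T - window + 1):          # sliding temporal window
        clip = frames[t:t + window]          # `window` consecutive frames
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                # co-located patches across the window form one input
                sets.append(clip[:, y:y + patch, x:x + patch])
    return np.stack(sets)

# Example: 10 frames of a 128x128 clip.
video = np.random.rand(10, 128, 128).astype(np.float32)
patch_sets = build_patch_sets(video)
print(patch_sets.shape)  # (96, 5, 32, 32): 6 windows x 16 locations
```

Each resulting (window, patch, patch) stack can then be presented to a convolutional network as a multi-channel input, so the learned filters capture both spatial texture and frame-to-frame motion, which is the cue the abstract argues single-image methods discard.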