Abstract
Hazy weather introduces serious degradations, such as color distortion and reduced visibility, in captured videos. Although extensive research on video dehazing has been carried out, existing methods fail to balance spatial detail and high-level contextual information during restoration. To address this challenging task, in this paper we propose a cross-stage recurrent feature sharing network that balances these competing objectives. In the proposed network, a cross-stage feature merging module operates on the features learned from the previous video frame by the proposed encoder-decoder generator architecture; these learned features are fed back into the same encoder-decoder framework when learning features for the current frame. Because two consecutive frames differ only slightly, sharing feature information between them benefits the restoration of the current frame. The encoder-decoder generator itself is built from a multi-receptive resolution module, a feature fusion module, and a pixel-wise spatial attention module, which respectively acquire broad contextual information, fuse intermediate-stage features to increase the learning ability of the network, and capture robust features while discarding unnecessary ones. Experimental results and an ablation study show that the proposed network is superior to existing state-of-the-art video dehazing methods.
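The sketch below is a minimal, hedged illustration of the cross-stage recurrent feature sharing idea described above, written in PyTorch. The class names, channel counts, and the merge operation (concatenation followed by a 1x1 convolution) are illustrative assumptions, not the authors' exact architecture; the multi-receptive resolution and feature fusion modules are omitted for brevity.

```python
# Illustrative sketch only: module names, channel widths, and the merge
# strategy are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class PixelWiseSpatialAttention(nn.Module):
    """Per-pixel gating that keeps robust features and suppresses the rest (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mask(x)


class CrossStageDehazer(nn.Module):
    """Encoder-decoder generator that reuses features learned from the previous frame."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Cross-stage feature merging: fuse previous-frame features with the
        # current frame's encoding via concatenation and a 1x1 convolution.
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.attention = PixelWiseSpatialAttention(channels)
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, frame, prev_features=None):
        feats = self.encoder(frame)
        if prev_features is not None:
            feats = self.merge(torch.cat([feats, prev_features], dim=1))
        feats = self.attention(feats)
        # Return the restored frame and the features shared with the next
        # frame, exploiting the small change between consecutive frames.
        return self.decoder(feats), feats


# Usage: process hazy frames sequentially, carrying features forward.
model = CrossStageDehazer()
prev = None
for _ in range(2):
    hazy = torch.rand(1, 3, 64, 64)  # stand-in for a video frame
    restored, prev = model(hazy, prev)
```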