Abstract

Single-image dehazing is a critical problem, since the presence of haze degrades image quality and hinders most advanced computer vision tasks. Early methods address the problem through the atmospheric scattering model, estimating its intermediate parameters and then recovering a clear image using low-level priors or learning on synthetic datasets. However, these model-based methods do not generalize well across diverse scenes. More recently, many learning-based methods recover dehazed images directly from the input, but they handle dense haze poorly and often introduce color distortion. To address these problems, we build a recurrent grid network with an attention mechanism, named RGNAM. Specifically, we propose a recurrent feature extraction block that repeats a local residual structure to enhance feature representation and adopts a spatial attention module to focus on dense haze. To alleviate color distortion, we extract local features (e.g., structures and edges) and global features (e.g., colors and textures) from a grid network and propose a feature fusion module that combines trainable weights with a channel attention mechanism to merge these complementary features effectively. We train our model with a smooth L1 loss and a structural similarity (SSIM) loss. Experimental results demonstrate that the proposed RGNAM surpasses previous state-of-the-art single-image dehazing methods on both synthetic and real haze datasets.
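The abstract describes two concrete components: a fusion module that merges local and global features via trainable weights plus channel attention, and a smooth L1 training loss. The sketch below illustrates both ideas in NumPy; the function names, the squeeze-and-excite shape of the channel gate, and the scalar fusion weights are assumptions for illustration, since the abstract gives no implementation details.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_gate(feat, w1, w2):
    """Channel attention (squeeze-and-excite style, assumed here).

    feat: (C, H, W) feature map.
    Squeeze: global average pool over spatial dims -> per-channel descriptor.
    Excite: tiny two-layer MLP with a sigmoid, yielding one gate per channel.
    """
    s = feat.mean(axis=(1, 2))        # (C,)
    h = np.maximum(w1 @ s, 0.0)       # ReLU bottleneck
    return sigmoid(w2 @ h)            # (C,) gates in (0, 1)

def fuse(local_feat, global_feat, a, b, params):
    """Merge complementary features with trainable scalar weights (a, b)
    and per-branch channel gates, as the abstract's fusion module suggests."""
    g_l = channel_gate(local_feat, *params["local"])
    g_g = channel_gate(global_feat, *params["global"])
    return (a * g_l[:, None, None] * local_feat
            + b * g_g[:, None, None] * global_feat)

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss: quadratic for small errors, linear for large ones."""
    diff = np.abs(pred - target)
    return np.where(diff < beta,
                    0.5 * diff ** 2 / beta,
                    diff - 0.5 * beta).mean()
```

In training, the smooth L1 term would be combined with an SSIM term (typically taken from an image-quality library) as the abstract states; the relative weighting of the two losses is not specified there.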
