Abstract

Single-image dehazing is a critical problem, since haze degrades image quality and hinders many downstream computer vision tasks. Early methods solve this problem via the atmospheric scattering model, estimating its intermediate parameters with low-level priors or with learning on synthetic datasets and then recovering a clear image. However, the assumptions behind these model-based methods do not hold in many scenes. Recently, many learning-based methods have recovered dehazed images directly from the inputs, but they fail to deal with dense haze and often introduce color distortion. To address these problems, we build a recurrent grid network with an attention mechanism, named RGNAM. Specifically, we propose a recurrent feature extraction block, which repeats a local residual structure to enhance feature representation and adopts a spatial attention module to focus on dense haze. To alleviate color distortion, we extract local features (e.g., structures and edges) and global features (e.g., colors and textures) from a grid network and propose a feature fusion module that combines trainable weights with a channel attention mechanism to merge these complementary features effectively. We train our model with a combination of smooth L1 loss and structural similarity (SSIM) loss. Experimental results demonstrate that RGNAM surpasses previous state-of-the-art single-image dehazing methods on both synthetic and real-world haze datasets.
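As a concrete illustration of the fusion step described above, the following is a minimal PyTorch sketch of a feature fusion module that merges local and global feature maps with trainable scalar weights followed by squeeze-and-excitation style channel attention. The class names, the scalar-weight formulation, and the reduction ratio are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweight each channel by its learned attention score.
        return x * self.fc(self.pool(x))


class FeatureFusion(nn.Module):
    """Hypothetical fusion of local and global features via trainable
    scalar weights followed by channel attention."""

    def __init__(self, channels: int):
        super().__init__()
        # Trainable fusion weights, initialized to equal contributions.
        self.w_local = nn.Parameter(torch.tensor(0.5))
        self.w_global = nn.Parameter(torch.tensor(0.5))
        self.attention = ChannelAttention(channels)

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        fused = self.w_local * local_feat + self.w_global * global_feat
        return self.attention(fused)


# Usage: fuse two 64-channel feature maps of the same spatial size.
fusion = FeatureFusion(channels=64)
out = fusion(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Letting the network learn the fusion weights (rather than fixing them at 0.5) allows the relative contribution of structural and color information to adapt during training; the channel attention then reweights the merged features channel by channel.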
