Abstract

Single image dehazing, which aims to restore a haze-free image from the corresponding unconstrained hazy scene, is a fundamental yet challenging task that has attracted considerable attention recently. However, the images recovered by some existing haze-removal methods still contain residual haze, artifacts, and color distortions, which severely degrade visual quality and harm subsequent computer vision tasks. To this end, we propose a network that combines multi-scale hierarchical feature fusion with mixed convolution attention to progressively and adaptively improve dehazing performance. By fusing multi-scale hierarchical features, the network accurately estimates haze levels and image structure, so the restored images contain less residual haze. The proposed mixed convolution attention mechanism reduces feature redundancy, learns compact and effective internal representations, and highlights task-relevant features, which further helps the model recover images with sharper textural details and more vivid colors. In addition, a deep semantic loss is proposed to emphasize essential semantic information in deep features. Experimental results show that the proposed method outperforms state-of-the-art haze removal algorithms.
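Since only the abstract is available here, the following PyTorch sketch is purely an illustrative interpretation of what a "mixed convolution attention" block could look like, assuming it mixes depthwise and pointwise convolutions and reweights channels with squeeze-and-excitation style attention; the block name, structure, and hyperparameters are assumptions, not the authors' published architecture.

```python
# Illustrative sketch only: a plausible mixed convolution attention block.
# The design below (depthwise + pointwise mixing, channel attention, residual path)
# is an assumption inferred from the abstract, not the paper's actual module.
import torch
import torch.nn as nn


class MixedConvAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Depthwise 3x3 branch captures local texture; pointwise 1x1 branch mixes channels.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        # Channel attention to suppress redundant features and highlight task-relevant ones.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mixed = self.depthwise(x) + self.pointwise(x)  # mix the two convolution types
        return x + mixed * self.attn(mixed)            # channel reweighting with a residual path


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)               # dummy feature map
    print(MixedConvAttention(64)(feats).shape)         # torch.Size([1, 64, 128, 128])
```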
