Abstract

Existing deep learning-based image dehazing algorithms commonly employ an encoder–decoder structure to learn a direct mapping from hazy images to haze-free images. However, these state-of-the-art methods often ignore how image content varies across scenes, which leads to unsatisfactory dehazing results. To address this issue, this paper integrates a novel instance-aware subnet into the classic encoder–decoder structure, separating foreground instances from the background and selectively incorporating instance features into the dehazing network. Specifically, we introduce a hybrid residual attention network that extracts full-image features and instance-level features separately. The architecture combines attention mechanisms with a multi-scale dilated convolution structure, enabling adaptive perception of haze density across different scenes. In addition, a global feature fusion subnet employs a pixel attention structure to fuse features from the entire image with those of individual instances, so that the fused representation remains instance-aware. Compared with existing methods, our approach estimates the haze density of individual instances more accurately, reduces color distortion, and mitigates noise amplification in the output images. Experimental results demonstrate that our method outperforms existing methods across multiple evaluation metrics and testing benchmarks. We therefore believe our method will be a valuable addition to the current collection of artificial intelligence models and will benefit engineering applications such as video surveillance and high-level computer vision tasks.
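
To make the two architectural ideas named above concrete, the following is a minimal PyTorch sketch of (a) a residual block with multi-scale dilated convolutions and channel attention, and (b) a pixel-attention fusion of full-image features with instance-level features. The layer sizes, module names, and wiring are illustrative assumptions for exposition, not the authors' exact design.

```python
# Minimal sketch of the two components described in the abstract.
# All hyperparameters (channel counts, dilation rates, reduction ratio)
# are assumptions chosen for illustration.
import torch
import torch.nn as nn


class MultiScaleDilatedBlock(nn.Module):
    """Residual block with parallel dilated convolutions and channel attention."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation enlarge the
        # receptive field so the block can sense haze density at several scales.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        )
        self.merge = nn.Conv2d(3 * channels, channels, 1)
        # Squeeze-and-excitation style channel attention over the merged features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        feats = self.merge(feats)
        return x + feats * self.attn(feats)  # residual connection


class PixelAttentionFusion(nn.Module):
    """Fuse full-image features with aggregated instance features per pixel."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # A per-pixel gate decides how much instance information to inject
        # at each spatial location.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, global_feat: torch.Tensor, inst_feat: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([global_feat, inst_feat], dim=1))
        return global_feat + w * inst_feat


if __name__ == "__main__":
    x = torch.randn(1, 64, 64, 64)      # full-image feature map
    inst = torch.randn(1, 64, 64, 64)   # instance features projected onto the image grid
    fused = PixelAttentionFusion()(MultiScaleDilatedBlock()(x), inst)
    print(fused.shape)  # torch.Size([1, 64, 64, 64])
```

In this sketch the instance branch is assumed to have already been rendered onto the same spatial grid as the full-image features; the gate then modulates, pixel by pixel, how strongly each instance contributes to the fused representation.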
