Abstract

Image dehazing aims to restore high-quality content from a hazy observation. Most existing learning-based methods achieve promising results by designing various networks. However, these approaches do not generalize well to real-world scenes, since they fail to exploit natural haze priors. To this end, we propose a novel Semi-supervised Progressive Dehazing Network (Semi-PDNet), which leverages both synthetic and real-world images during training. The overall network follows a progressive architecture and can be divided into three core stages: the image encoding stage (IES), the feature enhancement stage (FES), and the hierarchical reconstruction stage (HRS). Specifically, the IES is responsible for encoding shallow features from the corrupted hazy image. Then, the FES distills finer local and global features via a well-designed dual-stream attentive block (DSAB). Finally, the HRS estimates semantic and contextual information through a hierarchical structure and accurately reconstructs the final clear image. This stage-by-stage paradigm makes full use of informative features from shallow to deep, helping the network remove haze more effectively. Furthermore, we employ an unlabeled contrastive guidance (UCG) to bridge the domain gap between synthetic and real-world images. Extensive experimental comparisons show that our Semi-PDNet achieves results comparable to those of other state-of-the-art dehazing algorithms.
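The stage-by-stage data flow described above can be sketched as a simple function composition. The following is a minimal, illustrative sketch only: the stage names come from the abstract, but every internal operation here is a hypothetical placeholder standing in for the paper's learned convolutional blocks (e.g., the DSAB inside the FES).

```python
# Minimal sketch of the progressive three-stage pipeline from the abstract.
# All arithmetic below is a toy stand-in for learned network layers.

def image_encode_stage(hazy):
    # IES: encode shallow features from the hazy input (placeholder op).
    return [p * 0.5 for p in hazy]

def feature_enhance_stage(feats):
    # FES: refine local/global features; the paper uses the DSAB here
    # (placeholder op).
    return [f + 0.1 for f in feats]

def hierarchical_reconstruction_stage(feats):
    # HRS: reconstruct the clear image from the enhanced features
    # (placeholder op; rounded to suppress float noise).
    return [round(f * 2.0, 6) for f in feats]

def semi_pdnet_forward(hazy):
    # Stage-by-stage paradigm: shallow -> enhanced -> reconstructed.
    shallow = image_encode_stage(hazy)
    enhanced = feature_enhance_stage(shallow)
    return hierarchical_reconstruction_stage(enhanced)

print(semi_pdnet_forward([0.2, 0.4]))  # toy "pixels" in, restored values out
```

The point of the sketch is only the composition order (IES, then FES, then HRS), which mirrors the progressive shallow-to-deep feature use the abstract emphasizes.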
