Abstract

Single image dehazing is a challenging computer vision task that underpins high-level applications such as object detection, navigation, and positioning systems. Most recent dehazing methods follow a "black box" recovery paradigm that learns to map a hazy input directly to its haze-free counterpart. Unfortunately, these algorithms neglect relevant image priors and the non-uniform distribution of haze, causing under- or over-dehazing. In addition, they pay little attention to preserving image detail during the dehazing process and thus inevitably produce blurry results. To address these problems, we propose a novel priors-assisted dehazing network (called PADNet), which exploits relevant image priors from two new perspectives: attention supervision and detail preservation. First, we leverage the dark channel prior to constrain the generation of an attention map that encodes the positions of hazy pixels, allowing the network to better capture non-uniform haze distributions. Second, we observe that the residual channel prior of a hazy image contains rich structural information, so we incorporate it into our dehazing architecture to preserve more structural detail. Furthermore, since the attention map and the dehazed image are predicted simultaneously as the model converges, we adopt a self-paced semi-curriculum learning strategy to alleviate learning ambiguity. Extensive quantitative and qualitative experiments on several benchmark datasets demonstrate that PADNet performs favorably against existing state-of-the-art methods. The code will be available at https://github.com/leandepk/PADNet.
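For readers unfamiliar with the dark channel prior used to supervise the attention map, the sketch below shows the standard computation (per He et al.'s classic formulation, not PADNet's internal code, which is not given in the abstract): the per-pixel minimum over color channels followed by a local minimum filter. The patch size of 15 is a conventional default, not a value stated by this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel prior of an H x W x 3 image with values in [0, 1].

    For each pixel, take the minimum intensity over the RGB channels,
    then a local minimum over a patch_size x patch_size neighborhood.
    In haze-free regions this tends toward 0; heavy haze keeps it bright,
    which is why it can serve as a rough map of haze pixel positions.
    """
    per_pixel_min = image.min(axis=2)  # min over the three color channels
    return minimum_filter(per_pixel_min, size=patch_size)
```

For example, a saturated red image has a dark channel of zero everywhere (green and blue are zero), while a uniformly bright, washed-out image, as haze produces, keeps a high dark channel.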
