Abstract

Single image dehazing is a challenging vision task that recovers a haze-free image from an observed hazy image. Recently, numerous learning-based dehazing methods have been proposed and have achieved promising performance. However, most of them suffer from a heavy computational burden and, even worse, cannot leverage negative-oriented supervision information well during training. To address these issues, we propose a novel dehazing method called the Task-related Contrastive Network (TC-Net). For a better trade-off between performance and parameter count, we design a compact dehazing network based on an autoencoder-like architecture. It mainly comprises two key modules: a feature enhancement module and an attention fusion module, which improve the feature representation capability of the network and preserve detail information, respectively. More importantly, we propose a task-related contrastive learning framework to fully exploit negative-oriented supervision information. Specifically, we utilize various task-specific data augmentation approaches (e.g., blur, sharpening, color, and light enhancement) to generate informative positive samples and hard negative samples. Furthermore, we employ an efficient and task-friendly feature embedding network, i.e., the encoder of the dehazing pipeline rather than a pre-trained model, to encode the augmented samples into a latent space where negative-oriented supervision information can be fully leveraged by the contrastive constraint. Extensive experiments demonstrate that TC-Net achieves remarkable performance compared with other state-of-the-art dehazing methods, with PSNR gains of up to 1.75 dB and 0.52 dB on the SOTS and Dense-Haze datasets, respectively.
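The abstract does not give the exact form of the contrastive constraint; as a minimal sketch, one common choice for pulling an anchor embedding toward a positive sample while pushing it away from hard negatives is an InfoNCE-style loss over cosine similarities. The function names, the temperature value, and the use of cosine similarity below are illustrative assumptions, not details taken from the paper:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss (illustrative sketch).

    anchor:    embedding of the restored image (e.g., from the
               dehazing pipeline's encoder).
    positive:  embedding of an informative positive sample
               (e.g., an augmented clear image).
    negatives: embeddings of hard negative samples
               (e.g., augmented hazy images).
    tau:       temperature hyperparameter (value assumed here).
    """
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

Under this formulation, negatives that lie close to the anchor in the latent space ("hard" negatives, such as those produced by task-specific augmentations) contribute more to the loss and therefore provide a stronger training signal than easily separable ones.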

