Abstract

Outdoor images are often degraded by the presence of haze in the atmosphere. The single image dehazing problem aims to restore the corresponding haze-free image. Previous successful approaches have utilized various hand-crafted features/priors; however, the resulting images suffer from color degradation and halo artifacts. Our analysis shows that these artifacts generally prevail around regions with high intensity variation, such as edge structures. This finding inspires us to consider the Laplacian of Gaussian (LoG) of the images, which retains exactly this information, to solve the single image haze removal problem. In this line of thought, we present an end-to-end model that learns to remove haze based on the per-pixel difference between the LoGs of the dehazed and the original haze-free images. The optimization of the proposed network is further enhanced by adversarial training and a perceptual loss function. The proposed method has been evaluated on the Synthetic Objective Testing Set (SOTS) and benchmark real-world hazy images using 16 image quality measures. In terms of Color Difference (CIEDE 2000), an improvement of ∼15.89% is observed over the state-of-the-art method of Yang et al. [50]. An ablation study is presented at the end to illustrate the improvements achieved by the various modules of the proposed network.
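The abstract describes a loss computed on the per-pixel difference between the LoG responses of the dehazed output and the haze-free ground truth. The snippet below is a minimal sketch of such a LoG-based loss in PyTorch; the kernel size, sigma, and the L1 distance are illustrative assumptions, not the paper's specified choices.

```python
# Sketch of a Laplacian-of-Gaussian (LoG) consistency loss.
# Assumptions (not from the abstract): 9x9 analytic LoG kernel, sigma=1.5,
# L1 per-pixel difference between LoG responses.
import torch
import torch.nn.functional as F


def log_kernel(size: int = 9, sigma: float = 1.5) -> torch.Tensor:
    """Build a (size x size) Laplacian-of-Gaussian filter."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx ** 2 + yy ** 2
    k = -(1.0 / (torch.pi * sigma ** 4)) * (1 - r2 / (2 * sigma ** 2)) \
        * torch.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # force zero mean so flat regions give zero response


def log_loss(dehazed: torch.Tensor, clear: torch.Tensor) -> torch.Tensor:
    """Per-pixel L1 difference between LoG responses of two (N, C, H, W) images."""
    c = dehazed.shape[1]
    # One depthwise LoG filter per channel.
    k = log_kernel().to(dehazed.device).repeat(c, 1, 1, 1)
    pad = k.shape[-1] // 2
    log_d = F.conv2d(dehazed, k, padding=pad, groups=c)
    log_c = F.conv2d(clear, k, padding=pad, groups=c)
    return F.l1_loss(log_d, log_c)
```

In training, a term like this would typically be weighted and combined with the adversarial and perceptual losses mentioned in the abstract; the exact weighting is not given there.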
