Abstract
Haze is a natural distortion of real-life images caused by specific weather conditions. It degrades both the perceptual fidelity and the information integrity of an image. Recovering a haze-free image from an observed hazy one is a complicated task because of its ill-posed nature. This study proposes the Deep-Dehaze network to retrieve haze-free images. Given an input image, the proposed architecture applies four feature extraction modules to perform nonlinear feature extraction. We adapt the traditional U-Net architecture and the residual network to design our model. We also introduce an l1 spatial-edge loss function that enables our system to achieve better performance than the typical l1 and l2 loss functions. Unlike other learning-based approaches, our network does not use any fusion connection for image dehazing. By training the image translation and dehazing network in an end-to-end manner, we obtain better results for both image translation and dehazing. We trained our network end to end and validated it on natural and synthetic hazy datasets. Experimental results on these synthetic and real-world images demonstrate that our model performs favorably against state-of-the-art dehazing algorithms without any post-processing, in contrast to traditional approaches.
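The abstract does not spell out the l1 spatial-edge loss, so the following is a minimal PyTorch sketch of one plausible formulation, assuming the edge term is an l1 penalty on the difference between the spatial gradients of the prediction and the ground truth; the function names and the edge_weight hyperparameter are illustrative assumptions, not the paper's exact loss.

```python
import torch.nn.functional as F

def image_gradients(x):
    """Horizontal and vertical finite differences of an NCHW tensor
    (a simple proxy for spatial edge content)."""
    dh = x[:, :, :, 1:] - x[:, :, :, :-1]
    dv = x[:, :, 1:, :] - x[:, :, :-1, :]
    return dh, dv

def l1_spatial_edge_loss(pred, target, edge_weight=0.1):
    """l1 pixel loss plus an l1 penalty on the gradient difference.
    edge_weight is an assumed hyperparameter, not taken from the paper."""
    pixel_term = F.l1_loss(pred, target)
    pred_dh, pred_dv = image_gradients(pred)
    tgt_dh, tgt_dv = image_gradients(target)
    edge_term = F.l1_loss(pred_dh, tgt_dh) + F.l1_loss(pred_dv, tgt_dv)
    return pixel_term + edge_weight * edge_term

# Example use during training: loss = l1_spatial_edge_loss(dehazed_batch, clean_batch)
```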
Highlights
Haze is an atmospheric phenomenon that appears due to the atmospheric absorption and scattering of light reflected by objects in the scene
A typical camera captures images of outdoor scenes through a tiny aperture, and the image quality varies with the amount of scene light passing through that aperture
The proposed Deep-Dehaze system achieves state-of-the-art performance through the cooperation of four different feature extraction modules
We propose a sequential-residual module for feature extraction that outperforms the traditional residual block, dense block, and residual-attention block (see the sketch after this list)
The Res-U-net used in this work employs only one global residual connection
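The sequential-residual module is only named in the highlights, so the block below is a minimal PyTorch sketch of one plausible reading: a short sequence of convolution + ReLU layers wrapped by a single skip connection. The class name SequentialResidualBlock, the layer count, and the channel width are assumptions rather than the paper's exact design.

```python
import torch.nn as nn

class SequentialResidualBlock(nn.Module):
    """Illustrative sequential-residual block: a stack of conv + ReLU layers
    applied in sequence, with the block input added back at the end."""
    def __init__(self, channels=64, num_layers=3, kernel_size=3):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers.append(nn.Conv2d(channels, channels, kernel_size,
                                    padding=kernel_size // 2))
            layers.append(nn.ReLU(inplace=True))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Single skip connection around the sequential stack.
        return x + self.body(x)
```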
Summary
Haze is an atmospheric phenomenon that appears due to the atmospheric absorption and scattering of light reflected by objects in the scene. A functional model of a hazy image fundamentally contains two components, namely the transmission information and the scattered atmospheric light. In our Res-U-net, we utilize only one residual connection for the U-Net architecture. This is a global residual connection that adds the input image to the extracted features at the end of the network, in contrast to the widely practiced U-Net architectures with local residual connections. A simple convolution operation on the concatenated tensor yields the desired features; at this stage, the feature blocks have collected all of the essential features for subtraction from the input image. We compare our performance with other dehazing methods on various datasets, and the conclusions are given in the final section.
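The two-component hazy-image model mentioned above is commonly formalized as the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where t is the transmission map and A the atmospheric light. The sketch below illustrates, under assumptions, how a single global residual connection and the convolution over a concatenated tensor might be wired in PyTorch; the class name ResUNetSketch, the shallow encoder-decoder body (down/up-sampling omitted for brevity), and the channel widths are illustrative and not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResUNetSketch(nn.Module):
    """Illustrative encoder-decoder with one global residual connection:
    the input image is combined with the learned feature map only at the end."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # A single convolution fuses the concatenated encoder/decoder features.
        self.fuse = nn.Conv2d(2 * channels, 3, 3, padding=1)

    def forward(self, x):
        enc = self.encoder(x)
        dec = self.decoder(enc)
        # Convolution over the concatenated tensor gives the residual map.
        residual = self.fuse(torch.cat([enc, dec], dim=1))
        # Global residual connection: the learned map (which can be negative,
        # i.e. an implicit subtraction of haze) is added to the input image.
        return x + residual
```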