Abstract
Image dehazing remains an open research topic that has seen considerable development, especially with the renewed interest in machine learning-based methods. A major challenge for existing dehazing methods is the estimation of transmittance, the key element of haze-affected imaging models. Conventional methods rely on a set of assumptions that reduce the solution search space. However, multiplying these assumptions tends to restrict the solutions to particular cases that cannot account for the reality of the observed image. In this paper, we reduce the number of simplifying hypotheses in order to attain a more plausible and realistic solution by exploiting a priori knowledge of the ground truth. The proposed method relies on pixel information shared between the ground truth and the hazy image to reduce these assumptions. This is achieved by using the ground truth and the hazy image to find geometric-pixel information through guided Convolutional Neural Networks (CNNs) with a Parallax Attention Mechanism (PAM). It uses the differential pixel-based variance to estimate transmittance. The pixel variance uses local and global patches between the assumed ground truth and the hazy image to refine the transmission map. The transmission map is further refined using improved Markov random field (MRF) energy functions. We tested the proposed algorithm on different images. The entropy values of the proposed method were 7.43 and 7.39, both improvements over the best existing results. Similar gains were observed in the other performance quality metrics, which validates the method's superiority over existing methods in terms of key image quality evaluation metrics. The main drawback of the proposed approach, an over-reliance on real ground truth images, is also investigated. The proposed method preserves more detail and hence yields better images than the existing state-of-the-art methods.
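As context for the transmittance estimation discussed above, the following is a minimal sketch of inverting the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), to recover the scene radiance J once a transmission map t and atmospheric light A have been estimated. This is an illustrative baseline only, not the paper's CNN/PAM pipeline; the function name and the `t_min` clamping parameter are assumptions for the sketch.

```python
import numpy as np

def recover_radiance(hazy, transmission, airlight, t_min=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t).

    hazy:         H x W x 3 hazy image, values in [0, 1]
    transmission: H x W estimated transmission map t(x)
    airlight:     scalar (or 3-vector) atmospheric light A
    t_min:        lower clamp on t to avoid amplifying noise
                  where the haze is dense (a common heuristic)
    """
    t = np.maximum(transmission, t_min)[..., None]  # broadcast over channels
    return (hazy - airlight) / t + airlight
```

With a perfect transmission map this inversion is exact; in practice the quality of the dehazed output depends almost entirely on how well t(x) is estimated, which is why the paper focuses on refining the transmission map.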
Highlights
Images acquired in an outdoor environment are sometimes affected by degradation due to atmospheric conditions such as fog, rain, snow, or wind-blown sand
The proposed method faces a challenge: some regions of the dehazed image tend to blur in cases where the ground truth must be assumed, since the method relies on the actual ground truth
We propose to solve the dehazing problem using a combination of Convolutional Neural Networks (CNNs) with a Parallax Attention Mechanism (PAM) via graph-cut algorithms
Summary
Images acquired in an outdoor environment are sometimes affected by degradation due to atmospheric conditions such as fog, rain, snow, or wind-blown sand. Haze is a type of degradation that affects image quality more or less homogeneously and persistently, making details very difficult to see. Weather conditions can cause fluctuations in atmospheric particles, which in turn cause haze [2]. The collective effect of these particles arises from the illumination effect in the image at any given pixel. These effects can be dynamic (snow or rain) or steady (haze, mist, and fog) [2].