Abstract
Recently, convolutional neural networks (CNNs) based on the encoder-decoder structure have been successfully applied to image dehazing. However, these CNN-based dehazing methods have two limitations. First, the dehazing models are large, with enormous numbers of parameters, which not only consumes considerable GPU memory but also makes them hard to train from scratch. Second, these models ignore the structural information at different resolutions of the intermediate layers, so they cannot capture informative texture and edge information for dehazing simply by stacking more layers. In this paper, we propose a light-weight end-to-end network named the residual dense pyramid network (RDPN) to address the above problems. To fully exploit the structural information at different resolutions of the intermediate layers, a new residual dense pyramid (RDP) is proposed as a building block. By introducing a dense information fusion layer and a residual learning module, the RDP maximizes the information flow and extracts local features. Furthermore, the RDP learns structural information from the intermediate layers via a multiscale pyramid fusion mechanism. To reduce the number of network parameters and to ease the training process, we use one RDP in the encoder and two RDPs in the decoder, followed by a multilevel pyramid pooling layer that incorporates global context features before estimating the final result. Extensive experimental results on a synthetic dataset and real-world images demonstrate that the new RDPN achieves favourable performance compared with some state-of-the-art methods, e.g., the recent densely connected pyramid dehazing network, the all-in-one dehazing network, the enhanced pix2pix dehazing network, pixel-based alpha blending, artificial multi-exposure image fusion and the genetic programming estimator, in terms of accuracy, run time and number of parameters. To be specific, RDPN outperforms all of the above methods in terms of PSNR by at least 4.25 dB. The run time of the proposed method is 0.021 s, and the number of parameters is 1,534,799, only 6% of that used by the densely connected pyramid dehazing network.
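To make the building block concrete, below is a minimal PyTorch sketch of what an RDP-style block could look like: densely connected convolutions, a 1×1 dense information fusion layer, a residual connection, and a simple multiscale pyramid fusion. The layer count, channel widths, pyramid scales and class name are illustrative assumptions rather than the exact configuration reported in the paper.

```python
# Minimal sketch of a residual dense pyramid (RDP) style block in PyTorch.
# All hyperparameters below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDensePyramidBlock(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4, pyramid_scales=(1, 2, 4)):
        super().__init__()
        # Densely connected convolutions: each layer sees all previous feature maps.
        self.dense_layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.dense_layers.append(
                nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True))
            )
            in_ch += growth
        # Dense information fusion: 1x1 conv compresses the concatenated features.
        self.fusion = nn.Conv2d(in_ch, channels, 1)
        self.pyramid_scales = pyramid_scales
        # One conv per pyramid scale to learn structure at each resolution.
        self.pyramid_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in pyramid_scales]
        )
        self.pyramid_fusion = nn.Conv2d(channels * len(pyramid_scales), channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.dense_layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        fused = self.fusion(torch.cat(feats, dim=1))
        local = fused + x  # residual learning over the dense features

        # Multiscale pyramid fusion: process downsampled copies, then upsample back.
        h, w = local.shape[-2:]
        branches = []
        for scale, conv in zip(self.pyramid_scales, self.pyramid_convs):
            y = F.avg_pool2d(local, kernel_size=scale) if scale > 1 else local
            y = conv(y)
            if scale > 1:
                y = F.interpolate(y, size=(h, w), mode="bilinear", align_corners=False)
            branches.append(y)
        return self.pyramid_fusion(torch.cat(branches, dim=1)) + x


if __name__ == "__main__":
    block = ResidualDensePyramidBlock()
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

In this reading, the dense connections and 1×1 fusion keep the block compact (few channels per layer) while the residual and pyramid paths preserve structural detail at several resolutions, which is consistent with the abstract's emphasis on a small parameter budget.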
Highlights
Introduction: The images taken on hazy days inevitably lose colour fidelity and intensity contrast, since floating particles in the atmosphere, such as water droplets and dust particles, absorb or scatter the light reflected from the scene object before it reaches the camera sensor.
Difference from DCPDN: Inspired by the densely connected pyramid dehazing network (DCPDN), we propose a novel end-to-end residual dense pyramid network (RDPN) for image dehazing.
Summary
The images taken on hazy days inevitably lose colour fidelity and intensity contrast, since floating particles in the atmosphere, such as water droplets and dust particles, absorb or scatter the light reflected from the scene object before it reaches the camera sensor. In the widely used atmospheric scattering model, the observed hazy image I is expressed as I(x) = J(x)t(x) + A(1 − t(x)), where J denotes the haze-free scene radiance, A corresponds to the atmospheric light, and t denotes the scene transmission map indicating the portion of light that reaches the camera sensor. Assuming that the haze is homogeneous, we can further denote the transmission map as t(x) = e^{−βd(x)}, where d represents the scene depth and β is the scattering coefficient of the atmosphere. Since only the observed image I is known, estimating the haze-free image J is a highly ill-posed problem.
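For illustration, the following NumPy sketch renders a synthetic hazy image with this model, I(x) = J(x)t(x) + A(1 − t(x)) with t(x) = e^{−βd(x)}. The function name, the atmospheric light A = 0.9 and the scattering coefficient β = 1.0 are placeholder choices, not values taken from the paper.

```python
# Small sketch of the atmospheric scattering model:
#   I(x) = J(x) * t(x) + A * (1 - t(x)),  with  t(x) = exp(-beta * d(x)).
# J, d, A and beta below are placeholder values for illustration only.
import numpy as np


def synthesize_haze(J, d, A=0.9, beta=1.0):
    """Render a hazy image I from a clean image J (H, W, 3) in [0, 1] and depth map d (H, W)."""
    t = np.exp(-beta * d)          # transmission: fraction of light reaching the camera
    t = t[..., None]               # broadcast over colour channels
    return J * t + A * (1.0 - t)   # attenuated scene radiance plus added airlight


# Example with random data standing in for a real image and depth map.
J = np.random.rand(256, 256, 3)
d = np.random.rand(256, 256) * 3.0  # depths in arbitrary units
I = synthesize_haze(J, d)
print(I.shape, float(I.min()), float(I.max()))
```

Inverting this rendering, i.e., recovering J from I alone, requires estimating both A and t, which is why single-image dehazing is treated as an ill-posed problem.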