Abstract

In this paper, we propose a deep convolutional network for single image dehazing based on a derived image fusion strategy. Instead of estimating the transmission map and atmospheric light as in previous methods, we directly generate a haze-free image with the proposed end-to-end trainable neural network. We derive five maps from the original hazy image based on the characteristics of the hazy scene to improve dehazing performance. First, the exposure map (EM) and saliency map (SM) complement each other to recover details in distant and nearby regions of the scene. Second, the white balance map (BM) and gamma correction map (GM) are employed to recover the latent colour and intensity components of the scene. Finally, the haze veil map (VM) is introduced to enhance the global image contrast. To efficiently blend the five derived maps, we propose a U-shaped deep convolutional network consisting of encoder and decoder layers that generates a haze-free image. The convolutional layers transferred from a pretrained ResNet50 serve as encoder layers for hierarchical feature extraction. Two efficient blocks, the cascaded residual block and the channel compression block, are introduced in the network for better dehazing performance. The final dehazed result is generated by combining the significant features of the different derived maps. Additionally, a perceptual loss is introduced for better visual quality. Experimental results on both synthetic and natural hazy images demonstrate that our algorithm performs comparably to, or even better than, state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and visual quality.
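The abstract does not specify how each of the five maps is computed. The sketch below illustrates one plausible way to derive them with common image-processing operations (gray-world white balance, a fixed gamma, frequency-tuned saliency, a smoothed minimum-channel veil estimate); all formulas, parameter values and the function name `derive_maps` are illustrative assumptions, not the paper's actual derivations.

```python
# Hypothetical sketch of deriving the five fusion inputs from a hazy image.
# The specific operations and constants (gamma = 0.7, 15x15 veil window, etc.)
# are assumptions for illustration, not the paper's derivations.
import cv2
import numpy as np

def derive_maps(hazy_bgr: np.ndarray):
    """Return (EM, SM, BM, GM, VM) for an 8-bit BGR hazy image."""
    img = hazy_bgr.astype(np.float32) / 255.0

    # Gamma correction map (GM): assumed gamma < 1 to lift dark, hazy regions.
    gm = np.power(img, 0.7)

    # White balance map (BM): gray-world correction of the colour cast.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    bm = np.clip(img * (channel_means.mean() / (channel_means + 1e-6)), 0.0, 1.0)

    # Exposure map (EM): weight pixels by how close their luminance is to mid-grey.
    lum = cv2.cvtColor(hazy_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    exposure_weight = np.exp(-((lum - 0.5) ** 2) / (2 * 0.25 ** 2))
    em = exposure_weight[..., None] * img

    # Saliency map (SM): frequency-tuned saliency (distance of the blurred image
    # to the mean Lab colour) used here as a stand-in saliency detector.
    lab = cv2.cvtColor(hazy_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    saliency = np.linalg.norm(blurred - lab.reshape(-1, 3).mean(axis=0), axis=2)
    saliency /= saliency.max() + 1e-6
    sm = saliency[..., None] * img

    # Haze veil map (VM): smoothed minimum-channel estimate of the veil,
    # partially subtracted to boost global contrast.
    veil = cv2.blur(img.min(axis=2), (15, 15))[..., None]
    vm = np.clip(img - 0.8 * veil, 0.0, 1.0)

    return em, sm, bm, gm, vm

# Example usage: the derived maps would be stacked with the hazy input and
# passed to the fusion network.
# em, sm, bm, gm, vm = derive_maps(cv2.imread("hazy.png"))
```

In the method described by the abstract, these derived maps are then blended by the U-shaped encoder-decoder network (ResNet50 encoder, cascaded residual and channel compression blocks) to produce the final haze-free image; the exact fusion mechanism is not detailed here.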
