Abstract

Reducing the impact of hazy images on subsequent visual information processing is a challenging problem. In this paper, combining the atmospheric scattering model, we propose an end-to-end multi-scale feature multiple parallel fusion network, called MMP-Net, for single-image haze removal. MMP-Net comprises three components: a multi-scale CNN module, a residual learning module, and a deep parallel fusion module. 1) In the multi-scale CNN module, multi-scale convolutional neural networks (CNNs) extract features at different scales, from global to local, and these features are fused multiple times in parallel. 2) In the residual learning module, residual blocks are introduced to learn detailed features deeply, which helps recover more image details. 3) In the deep parallel fusion module, the features from the residual learning module are deeply merged with the fused features from the CNNs and finally used to recover a clean, haze-free image via the atmospheric scattering model. Experimental results show that, averaged over three datasets (SOTS, HSTS, and D-Hazy), the proposed MMP-Net improves PSNR from 20.91 dB to 22.21 dB and SSIM from 0.8720 to 0.9023 over the best state-of-the-art method, DehazeNet. Moreover, MMP-Net achieves the best subjective visual quality on real-world hazy images.
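
For context, the dehazing described above rests on the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)). The snippet below is a minimal NumPy sketch of only the final recovery step, assuming the transmission map t and atmospheric light A have already been estimated (for example, by a network like the one described here); the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I : hazy image as a float array in [0, 1], shape (H, W, 3)
    t : estimated transmission map, shape (H, W) or (H, W, 1)
    A : estimated atmospheric light, scalar or per-channel array of shape (3,)
    """
    if t.ndim == 2:                # broadcast the transmission map over the color channels
        t = t[..., np.newaxis]
    t = np.clip(t, t_min, 1.0)     # lower-bound t to avoid dividing by near-zero values
    J = (I - A) / t + A            # solve the model for the scene radiance J
    return np.clip(J, 0.0, 1.0)
```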

Highlights

  • In hazy weather, fine particles suspended in the atmosphere scatter light, so outdoor images captured by cameras suffer reduced quality and dim colors

  • We propose an end-to-end multi-scale feature multiple parallel fusion network, MMP-Net, which consists of a multi-scale CNN module, a residual learning module and a deep parallel fusion module

  • Given a clean image J, a random atmospheric light A ∈ [0.7, 1.0] for each channel, and the corresponding ground-truth depth map d, the transmission map is first synthesized as t(x) = e^(−β·d(x)) with a randomly selected scattering coefficient β ∈ [0.6, 1.8]; a hazy image is then generated from the physical model in Eq. (1) (see the sketch below)
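
As a rough illustration of the synthesis procedure in the last highlight, the sketch below generates one hazy training image from a clean image and its depth map via I(x) = J(x)·t(x) + A·(1 − t(x)). The sampling ranges for A and β follow the highlight, while the function name and per-channel handling are assumptions for illustration.

```python
import numpy as np

def synthesize_hazy(J, depth, rng=None):
    """Create a synthetic hazy image from a clean image and its depth map.

    J     : clean image, float array in [0, 1], shape (H, W, 3)
    depth : ground-truth depth map, shape (H, W)
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = rng.uniform(0.6, 1.8)                  # scattering coefficient beta in [0.6, 1.8]
    A = rng.uniform(0.7, 1.0, size=3)             # atmospheric light per channel in [0.7, 1.0]
    t = np.exp(-beta * depth)[..., np.newaxis]    # transmission map t(x) = exp(-beta * d(x))
    I = J * t + A * (1.0 - t)                     # physical model: I = J*t + A*(1 - t)
    return np.clip(I, 0.0, 1.0)
```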

Summary

Introduction

Due to the fine particles suspended in the atmosphere, light reaching the camera is scattered in hazy weather, so outdoor images suffer reduced quality and dim colors. This degradation has a negative impact on human perception and hampers many computer vision tasks, such as video surveillance [1], target recognition [2], [3], image classification [4], [5], and so on. Zhu et al. [10] discovered a linear relationship between scene depth and the brightness and saturation of an image, and proposed the color attenuation prior method.
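
As a rough sketch of the color attenuation prior idea mentioned above, scene depth can be modeled as a linear function of an image's brightness and saturation. The coefficients and function name below are illustrative placeholders, not the values learned in [10].

```python
import numpy as np

def cap_depth(image, theta0=0.12, theta1=0.96, theta2=-0.78):
    """Coarse depth from a linear color attenuation model:
    d(x) = theta0 + theta1 * v(x) + theta2 * s(x),
    where v is HSV brightness and s is saturation, both in [0, 1].
    The theta values here are illustrative placeholders.

    image : RGB image, float array in [0, 1], shape (H, W, 3)
    """
    v = image.max(axis=-1)                              # HSV value = max over channels
    mn = image.min(axis=-1)
    s = np.where(v > 0, (v - mn) / np.maximum(v, 1e-6), 0.0)  # HSV saturation
    return theta0 + theta1 * v + theta2 * s
```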

