Abstract

Image dehazing aims to reduce the image degradation caused by suspended particles in order to support high-level vision tasks. Besides the atmospheric scattering model, convolutional neural networks (CNNs) have been used for image dehazing. However, existing image dehazing algorithms are limited when faced with unevenly distributed haze and dense haze in real-world scenes. In this paper, we propose a novel end-to-end convolutional neural network called the attention enhanced serial Unet++ dehazing network (AESUnet) for single image dehazing. We build a serial Unet++ structure that connects two pruned Unet++ blocks in series through residual connections. Compared with a simple encoder-decoder structure, the serial Unet++ module makes better use of the features extracted by the encoders and promotes the fusion of contextual information at different resolutions. In addition, we improve the Unet++ module through pruning, convolutional modules with a ResNet structure, and a residual learning strategy, so that it generates more realistic images with less color distortion. Furthermore, following the serial Unet++ blocks, an attention mechanism pays different attention to haze regions of different concentrations by learning weights in both the spatial and channel domains. Experiments are conducted on two representative benchmarks: the large-scale synthetic dataset RESIDE and the small-scale real-world datasets I-HAZY and O-HAZY. The experimental results show that the proposed dehazing network is not only comparable to state-of-the-art methods on the RESIDE synthetic dataset, but also surpasses them by a very large margin on the I-HAZY and O-HAZY real-world datasets.
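As a concrete illustration of the attention mechanism described above, below is a minimal PyTorch sketch of channel- and spatial-domain attention applied to the feature map produced by the serial Unet++ blocks. The module name SpatialChannelAttention, the layer widths, and the reduction ratio are our own assumptions for illustration; the paper's exact layer configuration is not specified here.

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Sketch of the channel- and spatial-domain attention described in the
    abstract: weights are learned so that regions of denser haze can receive
    larger corrections. All layer sizes here are illustrative assumptions."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, learn one weight per channel.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: learn one weight per pixel from the channel-refined map.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)   # reweight channels
        x = x * self.spatial(x)   # reweight spatial positions
        return x

# Usage: refine a 64-channel feature map from the serial Unet++ blocks.
features = torch.randn(1, 64, 128, 128)
refined = SpatialChannelAttention(64)(features)
print(refined.shape)  # torch.Size([1, 64, 128, 128])
```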

Highlights

  • When light spreads through dense suspended particles such as fog, haze, smoke, and dust, the image information collected by imaging sensors is seriously degraded by particle scattering, causing the loss of a large amount of useful information and greatly limiting high-level vision tasks

  • Although the methods mentioned above have significantly improved the quality of dehazed images, these generic methods suffer from complex models, unevenly distributed haze, and an insufficient degree of dehazing after reconstruction

  • The experimental results of our attention enhanced serial Unet++ dehazing network (AESUnet) and the other comparative methods on the RESIDE dataset are shown in Figure 5 and Table 1


Introduction

When light propagates through dense suspended particles such as fog, haze, smoke, and dust, the image information collected by imaging sensors is seriously degraded by particle scattering, which causes the loss of a large amount of useful information and greatly limits high-level vision tasks. The purpose of image dehazing is to eliminate the influence of the atmospheric environment on image quality, increase the visibility of images, and provide support for downstream vision tasks such as classification, localization, and self-driving systems. In the past few decades, single image dehazing has been widely used in outdoor video surveillance systems, such as those monitoring highway traffic and forest and grassland ecology. As a foundational low-level vision task, single image dehazing has gained more and more attention from the computer vision community and artificial intelligence companies around the world. Numerous image dehazing methods can in general be divided into traditional methods and learning-based methods. Traditional image dehazing algorithms are mostly based on hypothetical models, among which the atmospheric scattering model introduced in [1,2] is one of the most successful: it explains the formation of haze well and provides a theoretical basis for traditional dehazing methods.
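For reference, the atmospheric scattering model of [1,2] is commonly written as

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},$$

where $I(x)$ is the observed hazy image, $J(x)$ is the haze-free scene radiance to be recovered, $A$ is the global atmospheric light, $t(x)$ is the medium transmission, $\beta$ is the scattering coefficient, and $d(x)$ is the scene depth. Recovering $J(x)$ from a single observation $I(x)$ is ill-posed, which is why traditional methods rely on handcrafted priors and why learned end-to-end models such as AESUnet are attractive.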
