Abstract

Existing dehazing algorithms struggle when dense haze is unevenly distributed across an image, and deep convolutional dehazing networks depend too heavily on large-scale datasets. To address these problems, this paper proposes a generative adversarial network based on a deep symmetric encoder-decoder architecture for removing dense haze. To restore the clear image, a four-layer down-sampling encoder is constructed to extract the semantic information lost to the dense haze. In the symmetric decoder module, an attention mechanism is introduced to adaptively assign weights to different pixels and channels, so as to handle the uneven distribution of haze. Finally, the model is trained within a generative adversarial framework so that it trains effectively on small-scale datasets. Experimental results show that the proposed dehazing network not only effectively removes unevenly distributed dense haze from real-scene images, but also performs well on real-scene datasets with few training samples, and its evaluation metrics surpass those of other widely used comparison algorithms.
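
As a rough illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of a four-stage down-sampling encoder, a symmetric decoder with skip connections, and a channel/pixel attention block applied at each decoder stage. The class names, layer widths, kernel sizes, and the exact attention design are illustrative assumptions rather than the authors' configuration, and the adversarial discriminator and loss functions are omitted.

import torch
import torch.nn as nn


class ChannelPixelAttention(nn.Module):
    """Re-weights feature channels first, then individual pixels (hypothetical design)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze-and-excite style global pooling + bottleneck MLP.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Pixel (spatial) attention: one weight per spatial location.
        self.pixel = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)   # per-channel weights
        return x * self.pixel(x)  # per-pixel weights


def conv_block(cin, cout):
    """Two 3x3 convolutions with ReLU, the basic unit of each stage."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )


class SymmetricDehazeGenerator(nn.Module):
    """Four down-sampling encoder stages, a bottleneck, and a symmetric decoder."""

    def __init__(self, widths=(64, 128, 256, 512)):
        super().__init__()
        self.encoders = nn.ModuleList()
        cin = 3
        for w in widths:                                 # four down-sampling stages
            self.encoders.append(conv_block(cin, w))
            cin = w
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(cin, cin * 2)

        self.ups = nn.ModuleList()
        self.decoders = nn.ModuleList()
        self.attns = nn.ModuleList()
        cin = cin * 2
        for w in reversed(widths):                       # four symmetric up-sampling stages
            self.ups.append(nn.ConvTranspose2d(cin, w, 2, stride=2))
            self.decoders.append(conv_block(2 * w, w))   # 2*w: up-sampled features + skip
            self.attns.append(ChannelPixelAttention(w))
            cin = w
        self.out = nn.Conv2d(cin, 3, 1)                  # reconstruct the haze-free RGB image

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)        # keep features for the symmetric skip connection
            x = self.pool(x)       # halve the spatial resolution
        x = self.bottleneck(x)
        for up, dec, attn, skip in zip(self.ups, self.decoders, self.attns, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
            x = attn(x)            # re-weight pixels/channels to handle unevenly distributed haze
        return torch.sigmoid(self.out(x))


if __name__ == "__main__":
    hazy = torch.rand(1, 3, 256, 256)
    print(SymmetricDehazeGenerator()(hazy).shape)        # torch.Size([1, 3, 256, 256])

Feeding a 256x256 hazy image through this sketch returns a tensor of the same size, which is the property the symmetric encoder-decoder design is meant to guarantee.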

Highlights

  • As a result of the two problems mentioned above, this paper proposes an attention-optimized deep symmetric encoder-decoder generative adversarial network for removing unevenly distributed dense haze in real scenes.

  • We propose a fully end-to-end network for single image dehazing.

Summary

Introduction

Images collected by the imaging sensor are seriously affected by the atmospheric environment, such as haze, and lose a lot of contextual information. The purpose of image dehazing is to eliminate the negative impact of the atmospheric environment on image quality and increase the visibility of the image (Figure 1), providing support for downstream visual tasks. If the atmospheric parameters and the scene depth can be correctly calculated, the haze-free image can be directly restored from such a hazy input.
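
The reference to "atmospheric parameters and scene depth" points to the standard atmospheric scattering model that underlies most dehazing work. It is not written out on this page, so the formulation below is supplied as well-known background rather than quoted from the paper:

I(x) = J(x) t(x) + A (1 - t(x)),   with   t(x) = e^{-\beta d(x)}

where I(x) is the observed hazy image, J(x) the haze-free scene radiance, A the global atmospheric light, t(x) the transmission map, \beta the scattering coefficient, and d(x) the scene depth. If A and t(x) are estimated correctly, the haze-free image follows directly as J(x) = (I(x) - A) / t(x) + A.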
