Abstract

Images captured in hazy weather often suffer from color distortion and texture blur due to turbid media suspended in the atmosphere. In this paper, we propose a Feature Attention Parallel Aggregation Network (FAPANet) to restore a clear image directly from the corresponding hazy input. It adopts an encoder-decoder structure while incorporating residual learning and attention mechanisms. FAPANet consists of two key modules: a novel feature attention aggregation module (FAAM) and an adaptive feature fusion module (AFFM). FAAM recalibrates features by integrating channel attention and pixel attention in parallel to emphasize useful information and suppress redundant features. The shallow and deep layers of a neural network tend to characterize the low-level and high-level semantic features of images, respectively, so we introduce AFFM to fuse these two kinds of features adaptively. Meanwhile, a joint loss function, composed of L1 loss, perceptual loss, and structural similarity (SSIM) loss, is employed during training to produce results with more vivid colors and richer details. Comprehensive experiments on both synthetic and real-world images demonstrate the impressive performance of the proposed approach.
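As a rough illustration of the parallel channel/pixel attention described in the abstract, the following PyTorch sketch shows one way such a block could be built. The module name FAAMSketch, the reduction ratio, and the 1x1 aggregation convolution are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a parallel channel/pixel attention block in the spirit of
# the paper's FAAM. Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class FAAMSketch(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, then re-weight each channel.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Pixel attention: a per-pixel importance map shared across channels.
        self.pixel_att = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # 1x1 convolution to aggregate the two recalibrated branches.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = x * self.channel_att(x)   # channel-wise recalibration
        pa = x * self.pixel_att(x)     # pixel-wise recalibration
        out = self.fuse(torch.cat([ca, pa], dim=1))
        return out + x                 # residual connection
```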

Highlights

  • We introduce a joint loss function that consists of L1 loss, perceptual loss, and structural similarity (SSIM) loss to train the proposed Feature Attention Parallel Aggregation Network (FAPANet), which dramatically enhances the details of the restored images

  • The proposed FAPANet is trained by minimizing a total loss function that combines the L1, SSIM, and perceptual losses described above with weighting coefficients, $L_{total} = \alpha L_{1} + \beta L_{SSIM} + \gamma L_{VGG}$ (see the code sketch after this list)

  • The quantitative experimental results show the powerful dehazing ability of FAPANet on real-world hazy images, and we believe that this strong dehazing ability benefits from the feature attention aggregation module (FAAM), the adaptive feature fusion module (AFFM), and the joint loss function proposed in this paper
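
The joint loss in the highlights could be assembled as in the following sketch. It assumes kornia for the SSIM term and VGG16 relu3_3 features for the perceptual term; the weights alpha, beta, gamma and the class name JointDehazeLoss are placeholders rather than the paper's actual settings.

```python
# Hedged sketch of the joint loss L_total = alpha*L1 + beta*L_SSIM + gamma*L_VGG.
# Weight values and the choice of VGG16 relu3_3 features are assumptions.
import torch
import torch.nn as nn
import torchvision
from kornia.losses import SSIMLoss  # SSIM term, assuming kornia is installed

class JointDehazeLoss(nn.Module):
    def __init__(self, alpha=1.0, beta=0.5, gamma=0.1):
        super().__init__()
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.l1 = nn.L1Loss()
        self.ssim = SSIMLoss(window_size=11)  # returns a loss (lower is better)
        # Frozen VGG16 feature extractor (up to relu3_3) for the perceptual term.
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        l1 = self.l1(pred, target)
        ssim = self.ssim(pred, target)
        perceptual = self.l1(self.vgg(pred), self.vgg(target))
        return self.alpha * l1 + self.beta * ssim + self.gamma * perceptual
```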

Summary

INTRODUCTION

The performance of this kind of two-stage approach [18, 19] decreases rapidly when the intermediate variables cannot be estimated accurately. For this reason, many end-to-end CNN-based methods [20, 21, 22, 23] have been proposed to remove haze by directly learning the sophisticated mapping between hazy images and their clear counterparts. We propose an effective neural network for single image haze removal inspired by three main ideas. First, some recent dehazing methods [21, 23, 24] based on the encoder-decoder structure have achieved good results, and UNet [25], a classic encoder-decoder network, also performs well in other image processing tasks such as image segmentation (a minimal encoder-decoder skeleton is sketched below). A thorough ablation study reveals the importance of each sub-module in the network.
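
For readers unfamiliar with the encoder-decoder structure mentioned above, the following minimal UNet-style skeleton with skip connections and a residual output illustrates the general shape of such a network. The depth and channel widths are illustrative assumptions and do not reflect FAPANet's actual configuration.

```python
# Minimal UNet-style encoder-decoder skeleton with skip connections.
# Purely illustrative; not the authors' architecture or settings.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class EncoderDecoderSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        b = self.bottleneck(self.down(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        # Residual learning: predict a correction added to the hazy input.
        return self.out(d1) + x
```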

RELATED WORK
LOSS FUNCTION
EXPERIMENTS
HAZE REMOVAL ON REAL-WORLD IMAGES
ANALYSIS AND DISCUSSIONS
Methods
Findings
CONCLUSION