SADnet: Semi-supervised Single Image Dehazing Method Based on an Attention Mechanism
Many real-life tasks, such as military reconnaissance and traffic monitoring, require high-quality images. However, images acquired in foggy or hazy weather hinder these tasks; consequently, image dehazing is an important research problem. To meet the requirements of practical applications, a single image dehazing algorithm must be able to effectively process real-world hazy images with high computational efficiency. In this article, we present a fast and robust semi-supervised dehazing algorithm named SADnet for practical applications. SADnet utilizes both synthetic datasets and natural hazy images for training, so it generalizes well to real-world hazy images. Furthermore, considering the uneven distribution of haze in the atmospheric environment, a Channel-Spatial Self-Attention (CSSA) mechanism is presented to enhance the representational power of the proposed SADnet. Extensive experimental results demonstrate that the presented approach achieves good dehazing performance and competitive running times compared with other state-of-the-art image dehazing algorithms.
- Research Article
14
- 10.3390/rs14225737
- Nov 13, 2022
- Remote Sensing
Image dehazing is crucial for improving advanced applications on remote sensing (RS) images. However, paired RS images for training deep neural networks (DNNs) are scarce, and synthetic datasets may suffer from domain-shift issues. In this paper, we propose a zero-shot RS image dehazing method based on a re-degradation haze imaging model, which directly restores the haze-free image from a single hazy image. Based on layer disentanglement, we design a dehazing framework consisting of three joint sub-modules to disentangle the hazy input image into three components: the atmospheric light, the transmission map, and the recovered haze-free image. We then generate a re-degraded hazy image by mixing up the hazy input image and the recovered haze-free image. Using the proposed re-degradation haze imaging model, we theoretically demonstrate that the hazy input and the re-degraded hazy image follow a similar haze imaging model. This finding allows us to train the dehazing network in a zero-shot manner. The dehazing network is optimized to generate outputs that satisfy the relationship between the hazy input image and the re-degraded hazy image in the re-degradation haze imaging model. Therefore, given a hazy RS image, the dehazing network directly infers the haze-free image by minimizing a specific loss function. Using uniform hazy datasets, non-uniform hazy datasets, and real-world hazy images, we conducted comprehensive experiments to show that our method outperforms many state-of-the-art (SOTA) methods in processing uniform or slight/moderate non-uniform RS hazy images. In addition, evaluation on a high-level vision task (RS image road extraction) further demonstrates the effectiveness and promising performance of the proposed zero-shot dehazing method.
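For context, the haze imaging model these abstracts build on is the standard single-scattering formulation I(x) = J(x)t(x) + A(1 - t(x)), where J is the scene radiance, t the transmission, and A the atmospheric light. A minimal NumPy sketch of haze synthesis, plus the mix-up-style re-degradation described above (all array sizes and constants here are illustrative, not the paper's actual parameters):

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Apply the standard atmospheric scattering model I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

# Toy haze-free image J in [0, 1], transmission map t, atmospheric light A.
rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))       # haze-free scene radiance
t = np.full((4, 4, 1), 0.6)     # transmission (uniform here for simplicity)
A = 0.9                         # global atmospheric light

I = synthesize_haze(J, t, A)    # hazy observation

# Re-degradation idea (sketch): mix the hazy input with a recovered
# haze-free estimate to obtain a second, "re-degraded" hazy image.
alpha = 0.5
J_hat = J                       # pretend the network recovered J perfectly
I_re = alpha * I + (1 - alpha) * synthesize_haze(J_hat, t, A)
```

With a perfect recovery the re-degraded image coincides with the hazy input; in training, the discrepancy between the two drives the zero-shot loss.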
- Conference Article
50
- 10.1109/cvpr42600.2020.00462
- Jun 1, 2020
The quality of images captured in bad weather is often affected by chromatic casts and low visibility due to the presence of atmospheric particles. Restoration of the color balance is often ignored in most existing image de-hazing methods. In this paper, we propose a varicolored end-to-end image de-hazing network which restores the color balance in a given varicolored hazy image and recovers the haze-free image. The proposed network comprises 1) a Haze color correction (HCC) module and 2) a Visibility improvement (VI) module. The HCC module provides the required attention to each color channel and generates a color-balanced hazy image, while the VI module processes the color-balanced hazy image through a novel inception attention block to recover the haze-free image. We also propose a novel approach to generate a large-scale varicolored synthetic hazy image database. An ablation study has been carried out to demonstrate the effect of different factors on the performance of the proposed network for image de-hazing. Three benchmark synthetic datasets have been used for quantitative analysis of the proposed network. Visual results on a set of real-world hazy images captured in different weather conditions demonstrate the effectiveness of the proposed approach for varicolored image de-hazing.
- Conference Article
9
- 10.1109/aipr57179.2022.10092202
- Oct 11, 2022
Image dehazing plays a major role in several vision-based applications aiming to improve image quality to obtain rich textural information. This paper proposes a methodology to retain textural information after image enhancement for vision-based algorithms. The objective is first to detect the hazy regions in a hazy input image and then perform dehazing on these detected regions. This results in retaining the texture of haze-free regions and an enhanced view of hazy regions. In the first part of the proposed framework, a Faster RCNN-based haze detection network named FR-HDNet is proposed to identify the hazy regions in an input hazy image. In the second part, the detected hazy regions are dehazed. This results in an enhanced image optimally equipped with features that could aid vision-based algorithms. Finally, the enhanced dehazed image is fed into an object detection network. The experiments to validate the performance of the proposed framework are done on several benchmark datasets: natural hazy benchmark images frequently used in the literature, synthetic hazy images, indoor Synthetic Objective Testing Set (SOTS) images from the REalistic Single Image DEhazing (RESIDE) dataset, outdoor SOTS images from the RESIDE dataset, and real-world synthetic hazy images of the Hybrid Subjective Testing Set (HSTS) from the RESIDE dataset. The performance measures used to evaluate the quality of dehazed images are the Peak Signal to Noise Ratio (PSNR) and Structural SIMilarity (SSIM) index, together with the Lightness Order Error (LOE) and Naturalness Image Quality Evaluator (NIQE) as no-reference image quality metrics. The effectiveness of the proposed framework is compared with several benchmark state-of-the-art dehazing methods. The comparison demonstrated that the proposed framework enhances image quality and results in better performance.
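Of the metrics listed above, PSNR is the simplest: it follows directly from the mean squared error between the dehazed image and its ground truth. A minimal NumPy sketch (SSIM, LOE, and NIQE require considerably more machinery and are omitted here):

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio between two images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images: infinite PSNR
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1                  # uniform error of 0.1 -> MSE = 0.01
print(psnr(clean, noisy))            # 10*log10(1/0.01) = 20.0 dB
```

Higher PSNR indicates a closer match to the reference; for 8-bit images, `max_val` would be 255.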
- Conference Article
8
- 10.1109/icmew46912.2020.9106053
- Jun 10, 2020
Recovering a clear image solely from a hazy input image is a challenging task. Moreover, a hazy image can drastically impact the performance of many subsequent high-level computer vision tasks, such as object detection and recognition. In this study, we propose a novel image dehazing method: Color Balancing and Histogram Equalization (CBHE). The method is designed with the aim of merging it with existing object detector models such as Faster RCNN [1] and improving the accuracy of object detection under poor visibility. In this method, color balancing and histogram equalization, along with other image processing techniques, are applied for dehazing. We used the dataset from the UG2+ Challenge Track 2 competition, called REalistic Single Image DEhazing (RESIDE) - STANDARD, which comprises a diverse set of both synthetic and real-world images. Experimental results on both indoor and outdoor test datasets demonstrate a large improvement in object detection performance compared to existing techniques when the dehazed image is merged with a pre-trained object detector.
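The two classical operations that CBHE combines, global histogram equalization and color balancing, can be sketched generically in NumPy. This is an illustration of the standard techniques only, not the paper's implementation; the gray-world balancing shown here is one common choice of color-balancing scheme, assumed for the sketch:

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]                     # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)   # lookup table over intensities
    return lut[gray]

def gray_world_balance(img):
    """Gray-world color balancing: scale each channel toward the global mean."""
    means = img.reshape(-1, 3).mean(axis=0)       # per-channel means
    return np.clip(img * (means.mean() / means), 0, 255).astype(np.uint8)
```

Equalization stretches the intensity CDF to cover the full range, which counters the low contrast of hazy images; gray-world balancing counters the color cast.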
- Research Article
40
- 10.1109/tetci.2020.3035407
- Nov 17, 2020
- IEEE Transactions on Emerging Topics in Computational Intelligence
Degradation in the quality of images captured in a hazy environment is mainly due to 1) different weather conditions and 2) the attenuation of reflected light. These factors introduce severe color distortion and low visibility in the captured images. To tackle these problems, we propose an end-to-end trainable image de-hazing network named LIGHT-Net. The proposed LIGHT-Net comprises a color constancy module and a haze reduction module. The color constancy module removes the color cast added to the hazy image by the weather conditions, whereas the haze reduction module, which is built using an inception-residual block, aims to reduce the effect of haze and improve visibility in the hazy image. Unlike traditional feature concatenation, in the haze reduction module we propose dense feature sharing to effectively share the features learned at initial layers across the network. In general, a major hurdle in training a convolutional neural network for the haze removal task is the unavailability of large-scale real-world hazy and corresponding haze-free images (i.e., paired data). Thus, we make use of an unpaired training approach to train the proposed LIGHT-Net for image de-hazing. Extensive analysis has been carried out to validate the necessity and impact of each sub-block of the proposed LIGHT-Net. A large set of real-world hazy images captured in different weather conditions is considered to validate the proposed approach for image de-hazing. A benchmark synthetic hazy image database is also used for quantitative analysis of the proposed LIGHT-Net. Further, we show the usefulness of the proposed LIGHT-Net for underwater image enhancement. Experiments show that the proposed LIGHT-Net outperforms other existing approaches for both image de-hazing and underwater image enhancement.
- Research Article
15
- 10.1038/s41598-022-19132-5
- Sep 2, 2022
- Scientific Reports
Single image dehazing, as a key prerequisite of high-level computer vision tasks, has attracted increasing attention. Traditional model-based methods recover haze-free images via the atmospheric scattering model; they achieve a favorable dehazing effect but suffer from artifacts, halos, and color distortion. By contrast, recent learning-based methods dehaze images in a model-free way; they achieve better color fidelity but tend to produce under-dehazed results due to the lack of knowledge guidance. To combine these merits, we propose a novel online knowledge distillation network for single image dehazing named OKDNet. Specifically, the proposed OKDNet first preprocesses hazy images and acquires abundant shared features via a multiscale network constructed with attention-guided residual dense blocks. These features are then sent to different branches to generate two preliminary dehazed images via supervised training: one branch acquires dehazed images via the atmospheric scattering model; the other directly establishes the mapping relationship between hazy images and clear images, dehazing in a model-free way. To effectively fuse useful information from these two branches and acquire better dehazed results, we propose an efficient feature aggregation block consisting of multiple parallel convolutions with different receptive fields. Moreover, we adopt a one-stage knowledge distillation strategy, online knowledge distillation, to jointly optimize our OKDNet. The proposed OKDNet achieves superior performance compared with state-of-the-art methods on both synthetic and real-world images with fewer model parameters. Project website: https://github.com/lanyunwei/OKDNet.
- Research Article
6
- 10.3390/s20185300
- Sep 16, 2020
- Sensors
Single image dehazing is a difficult problem because of its ill-posed nature. It has received increasing attention recently owing to its high potential in many visual tasks. Although single image dehazing has made remarkable progress in recent years, existing methods are mainly designed for haze removal in daytime. Nighttime dehazing is more challenging: most daytime dehazing methods become invalid due to multiple scattering phenomena and non-uniformly distributed dim ambient illumination. While a few approaches have been proposed for nighttime image dehazing, low ambient light is usually ignored. In this paper, we propose a novel unified nighttime hazy image enhancement framework that addresses haze removal and illumination enhancement simultaneously. Specifically, both halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination of low-light hazy conditions are considered for the first time in our approach. More importantly, most current daytime dehazing methods can be effectively incorporated into the nighttime dehazing task through our framework. Firstly, we decompose the observed hazy image into a halo layer and a scene layer to remove the influence of multiple scattering. After that, we estimate the spatially varying ambient illumination based on the Retinex theory. We then employ classic daytime dehazing methods to recover the scene radiance. Finally, we generate the dehazing result by combining the adjusted ambient illumination and the scene radiance. Compared with various daytime dehazing methods and state-of-the-art nighttime dehazing methods, both quantitative and qualitative experimental results on real-world and synthetic hazy image datasets demonstrate the superiority of our framework in terms of halo mitigation, visibility improvement, and color preservation.
- Conference Article
164
- 10.1109/cvprw.2018.00134
- Jun 1, 2018
This paper reviews the first challenge on image dehazing (restoration of rich details in hazy images), with a focus on the proposed solutions and results. The challenge had two tracks: Track 1 employed indoor images (the I-HAZE dataset), while Track 2 employed outdoor images (the O-HAZE dataset). The hazy images were captured in the presence of real haze generated by professional haze machines. The I-HAZE dataset contains 35 scenes corresponding to indoor domestic environments, with objects of different colors and specularities. O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. The dehazing process was learnable through provided pairs of haze-free and hazy training images. Each track had about 120 registered participants, and 21 teams competed in the final testing phase. The results gauge the state-of-the-art in image dehazing.
- Research Article
4
- 10.1016/j.engappai.2024.108359
- Apr 6, 2024
- Engineering Applications of Artificial Intelligence
Photo realistic synthetic dataset and multi-scale attention dehazing network
- Research Article
3
- 10.1371/journal.pone.0253214
- Jun 28, 2021
- PLoS ONE
In water scenes, where hazy images are subject to multiple scattering and where ideal datasets are difficult to collect, many dehazing methods are not as effective as they could be. Therefore, an unsupervised water scene dehazing network using an atmospheric multiple scattering model is proposed. Unlike previous image dehazing methods, our method uses an unsupervised neural network together with the atmospheric multiple scattering model, addressing both the difficulty of acquiring ideal datasets and the effect of multiple scattering on the image. To embed the atmospheric multiple scattering model into the unsupervised dehazing network, the network uses four branches to estimate the scene radiance layer, transmission map layer, blur kernel layer, and atmospheric light layer; the hazy image is then synthesized from the four output layers, and the difference between the input hazy image and the synthesized hazy image is minimized, with the output scene radiance layer taken as the final dehazed image. In addition, we construct unsupervised loss functions applicable to image dehazing from prior knowledge, i.e., a color attenuation energy loss and a dark channel loss. The method has a wide range of applications: since haze is thick and variable in marine, river, and lake scenes, it can be used to assist ship vision for target detection or forward road recognition in hazy conditions. Through extensive experiments on synthetic and real-world images, the proposed method recovers the details, structure, and texture of water images better than five advanced dehazing methods.
- Research Article
97
- 10.1109/tits.2022.3170328
- Nov 1, 2022
- IEEE Transactions on Intelligent Transportation Systems
Image dehazing is a common operation in autonomous driving, traffic monitoring and surveillance. Learning-based image dehazing has achieved excellent performance recently. However, it is nearly impossible to capture pairs of hazy/clean images from the real world to train an image dehazing network. Most existing dehazing models learnt from synthetically generated hazy images generalize poorly to real-world hazy scenarios due to the obvious domain shift. To deal with this unpaired problem arising from real-world hazy images, we present the Cycle Spectral Normalized Soft likelihood estimation Patch Generative Adversarial Network (Cycle-SNSPGAN) for image dehazing. Cycle-SNSPGAN is an unsupervised dehazing framework that boosts generalization to real-world hazy images. To leverage unpaired samples of real-world hazy images without relying on their clean counterparts, we design an SN-Soft-Patch GAN and exploit a new cyclic self-perceptual loss which avoids using the ground-truth image to compute perceptual similarity. Moreover, a significant color loss is adopted to brighten the dehazed images as humans expect. Both visual and numerical results show clear improvements of the proposed Cycle-SNSPGAN over the state-of-the-art in terms of haze robustness and image detail recovery, even when Cycle-SNSPGAN is trained on only a small dataset. Code is available at https://github.com/yz-wang/Cycle-SNSPGAN.
- Research Article
3
- 10.1109/tvcg.2022.3233900
- Jul 1, 2024
- IEEE transactions on visualization and computer graphics
The haze in a scenario may affect 360° photo/video quality and the immersive 360° virtual reality (VR) experience. Recent single image dehazing methods have, to date, focused only on planar images. In this work, we propose a novel neural network pipeline for single omnidirectional image dehazing. To create the pipeline, we build the first hazy omnidirectional image dataset, which contains both synthetic and real-world samples. Then, we propose a new stripe sensitive convolution (SSConv) to handle the distortion problems caused by equirectangular projections. The SSConv calibrates distortion in two steps: 1) extracting features using different rectangular filters, and 2) learning to select the optimal features by weighting the feature stripes (a series of rows in the feature maps). Subsequently, using SSConv, we design an end-to-end network that jointly learns haze removal and depth estimation from a single omnidirectional image. The estimated depth map is leveraged as an intermediate representation and provides global context and geometric information to the dehazing module. Extensive experiments on challenging synthetic and real-world omnidirectional image datasets demonstrate the effectiveness of SSConv, and our network attains superior dehazing performance. Experiments on practical applications also demonstrate that our method can significantly improve 3-D object detection and 3-D layout performance for hazy omnidirectional images.
- Research Article
62
- 10.1109/tits.2022.3225797
- Mar 1, 2023
- IEEE Transactions on Intelligent Transportation Systems
Visibility issues in intelligent transportation systems are exacerbated by bad weather conditions such as fog and haze. Recent studies observe that major road accidents have occurred worldwide due to low visibility and inclement weather. Single image dehazing attempts to restore a haze-free image from an unconstrained hazy image. We propose a dehazing method that cascades two models utilizing a novel parameter-adaptive dual-channel modified simplified pulse coupled neural network (PA-DC-MSPCNN). The first model uses a new color channel for removing haze from images. The second model is the improved brightness-preserving model (I-GIHE), which retains the brightness of the image while improving the gradient strength. To integrate the results from these two models and provide a pleasing haze-free image, a PA-DC-MSPCNN-based fusion is used. Furthermore, the proposed approach is deployed on a Xilinx Zynq SoC by exploiting the recently released PYNQ platform. The dehazing system runs on a PYNQ-Z2 all-programmable SoC platform, where the camera feed is input through the FPGA unit and the dehazing algorithm is carried out on the ARM core. This configuration reaches real-time processing speed for image dehazing. The results of dehazing are analyzed using both synthetic and real-world hazy images. Synthetic hazy images are acquired from the O-HAZE, I-HAZE, SOTS, and FRIDA datasets, while real-world hazy images are taken from the RailSem19 and E-TUVD datasets and the internet. Twelve cutting-edge approaches are chosen for evaluation. The proposed method is also analyzed on underwater and low-light images. Extensive experiments indicate that the proposed method outperforms state-of-the-art methods in both qualitative and quantitative performance.
- Research Article
43
- 10.1016/j.imavis.2023.104747
- Jun 28, 2023
- Image and Vision Computing
Single image dehazing using extended local dark channel prior
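For context, the classical dark channel prior that this title extends computes, for each pixel, the minimum intensity over all color channels within a local patch; on haze-free outdoor images this value is typically near zero, so deviations reveal haze density. A minimal (unoptimized) NumPy sketch of the classical prior, not the paper's extended local variant:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: channel-wise minimum per pixel, then a local min filter."""
    mins = img.min(axis=2)                       # minimum over RGB at each pixel
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")      # replicate borders
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):                           # sliding-window local minimum
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

In practice the dark channel is used to estimate the transmission map in the atmospheric scattering model; a production implementation would replace the Python loops with a vectorized or filter-based minimum.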
- Research Article
69
- 10.1109/tcsvt.2021.3097713
- May 1, 2022
- IEEE Transactions on Circuits and Systems for Video Technology
In recent years, learning-based single image dehazing networks have developed comprehensively. However, performance improvement is limited by the domain shift between trained synthetic hazy images and untrained real-world hazy images. To alleviate this issue, this paper proposes a training scheme targeted at real-world dehazing that nearly realizes paired real-world data training. To this end, a Twofold Multi-scale Generative Adversarial Network (TMS-GAN), consisting of a Haze-generation GAN (HgGAN) and a Haze-removal GAN (HrGAN), is designed. HgGAN attributes real haze properties to synthetic images, and HrGAN removes haze from both synthetic and generated fake-realistic data under supervision. Thus, the proposed method can better adapt to real-world image dehazing using this cooperative training scheme. Meanwhile, several structural advances of TMS-GAN also improve dehazing performance. Specifically, a haze residual map based on the atmospheric scattering model is deduced in HgGAN for fake-realistic data generation. The dual-branch generator in HrGAN devotes one branch to detail restoration and another to color. A plug-and-play Multi-attention Progressive Fusion Module (MAPFM) is proposed and inserted into both HgGAN and HrGAN. MAPFM incorporates a multi-attention mechanism to guide multi-scale feature fusion in a progressive manner, in which an Adjacency-attention Block (AAB) captures the contributing features of each level and a Self-attention Block (SAB) establishes non-local dependencies for feature fusion. Experiments on mainstream benchmarks show that the proposed framework is superior among single image dehazing methods, especially on real-world hazy images.