Abstract

Haze significantly degrades image quality in fields such as autonomous driving, smart cities, and security monitoring. Deep learning has proven effective at removing haze from images, but obtaining pixel-aligned pairs of hazy and clear images in the real world is challenging, so synthesized hazy images are often used to train deep networks. These images are typically generated from parameters such as depth information and the atmospheric scattering coefficient, an approach that can lose important haze details and lead to color distortion or incompletely dehazed results. To address this problem, this paper proposes a method for synthesizing hazy images using a cycle generative adversarial network (CycleGAN). The CycleGAN is trained on unpaired hazy and clear images to learn the characteristics of real haze; the trained network then adds these haze features to clear images, yielding well-pixel-aligned synthesized hazy and clear image pairs suitable for dehazing training. The results demonstrate that a dataset synthesized with this method effectively overcomes the problems of traditional synthesized datasets. Furthermore, the dehazed images are restored with a super-resolution algorithm, producing high-resolution clear images. This method broadens the applications of deep learning in haze removal, particularly in autonomous driving and smart cities.
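The traditional parameter-based synthesis the abstract critiques typically follows the standard atmospheric scattering model, I = J·t + A·(1 − t) with transmission t = exp(−β·d), where J is the clear image, d the per-pixel depth, β the scattering coefficient, and A the atmospheric light. The following is a minimal sketch of that baseline approach (the function name and parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def synthesize_haze(clear, depth, beta=1.0, airlight=0.9):
    """Hazy image via the atmospheric scattering model:
    I = J * t + A * (1 - t), with t = exp(-beta * depth).
    clear: HxWx3 float image in [0, 1]; depth: HxW depth map."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission
    return clear * t + airlight * (1.0 - t)

# illustrative example: flat gray image with a linear depth ramp
clear = np.full((4, 4, 3), 0.5)
depth = np.linspace(0.0, 5.0, 16).reshape(4, 4)
hazy = synthesize_haze(clear, depth, beta=0.8, airlight=0.9)
```

Because every pixel is a convex combination of the clear color and a single global airlight, this model cannot reproduce the spatially varying texture of real haze, which motivates learning haze features with a CycleGAN instead.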
