Image dehazing is a common preprocessing step in vision-based applications, yet existing methods often suffer from color distortion and loss of detail, and conventional techniques frequently fail to recover sharp images from hazy scenes. To improve single-image dehazing, this paper presents a novel method that combines Self-Attention Generative Adversarial Networks (SAGANs) with Cycle-Consistent Generative Adversarial Networks (CycleGAN). Our method incorporates the self-attention mechanism of SAGANs to capture long-range dependencies and enhance the perceptual quality of dehazed images, while leveraging CycleGAN's strength in image-to-image translation without the need for paired training samples. We present a dual-framework design in which self-attention refines textural and contextual details by attending to relevant features across the entire image, while CycleGAN's cycle consistency preserves content fidelity. Our experiments show that this integrated approach achieves a Peak Signal-to-Noise Ratio (PSNR) of 34.146 dB and a Structural Similarity Index (SSIM) of 0.964, substantially outperforming previous methods in both qualitative and quantitative evaluations. In addition to setting a new benchmark for single-image dehazing, this work demonstrates the effectiveness of hybrid generative models for challenging image processing problems.
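The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of the two ingredients the method combines: a SAGAN-style self-attention block (Zhang et al., 2019) and a CycleGAN cycle-consistency loss. The generator names `G_dehaze` and `G_rehaze` and the loss weight `lam` are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
    """SAGAN-style self-attention block.

    Every output position aggregates features from all spatial
    positions, which is how long-range dependencies are captured.
    """

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable gate, initialised to 0 so the block starts as identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C/8)
        k = self.key(x).flatten(2)                    # (B, C/8, HW)
        attn = F.softmax(q @ k, dim=-1)               # (B, HW, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


def cycle_consistency_loss(real_hazy, real_clear, G_dehaze, G_rehaze, lam=10.0):
    """L1 cycle loss over both directions: hazy -> clear -> hazy
    and clear -> hazy -> clear (hypothetical generator names)."""
    rec_hazy = G_rehaze(G_dehaze(real_hazy))
    rec_clear = G_dehaze(G_rehaze(real_clear))
    return lam * (F.l1_loss(rec_hazy, real_hazy) + F.l1_loss(rec_clear, real_clear))
```

In such a hybrid, the self-attention block would typically be inserted into the CycleGAN generators at intermediate feature resolutions, with the cycle loss added to the usual adversarial objectives so that unpaired hazy and clear images can be used for training.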