Compared with pixel-level content loss, the domain-level style loss in CycleGAN-based dehazing algorithms imposes only relatively soft constraints on the intermediate translated images, so such methods struggle to accurately model haze-free features from real hazy scenes. Furthermore, a globally perceptual discriminator may misclassify real hazy images with significant scene-depth variation as having a clean style, resulting in severe haze residue. To address these issues, we propose a pseudo-self-distillation-based CycleGAN with enhanced local adversarial interaction for image dehazing, termed PSD-ELGAN. On the one hand, we exploit the cyclic structure of CycleGAN to generate pseudo image pairs during training. Knowledge distillation is then employed within this unsupervised framework to transfer informative high-quality features from the self-reconstruction network for real clean images to the dehazing generator for the paired pseudo hazy images, which effectively improves its haze-free feature representation without increasing the number of network parameters. On the other hand, from the output of the dehazing generator, four non-uniform image patches severely affected by residual haze are adaptively selected as input samples. The local discriminator can easily distinguish their hazy style, further compelling the dehazing generator to suppress haze residue in these regions and thus enhancing its dehazing performance. Extensive experiments show that PSD-ELGAN achieves promising results and generalizes well across various datasets.
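The adaptive selection of haze-affected patches for the local discriminator can be illustrated with a minimal sketch. The abstract does not specify the selection criterion, so this example assumes the dark channel (per-pixel minimum over RGB) as a haze-density proxy; the function names `patch_haze_scores` and `select_hazy_patches` are hypothetical, not from the paper.

```python
import numpy as np

def patch_haze_scores(img, grid=4):
    """Score each cell of a grid x grid partition by its mean dark channel
    (per-pixel min over RGB) -- an assumed proxy for residual haze density."""
    h, w, _ = img.shape
    dark = img.min(axis=2)  # dark channel: bright everywhere in hazy regions
    ph, pw = h // grid, w // grid
    scores = {}
    for i in range(grid):
        for j in range(grid):
            cell = dark[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            scores[(i, j)] = float(cell.mean())
    return scores

def select_hazy_patches(img, k=4, grid=4):
    """Return the k grid cells with the highest haze score; such crops
    would serve as 'hazy-style' samples for the local discriminator."""
    scores = patch_haze_scores(img, grid)
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    h, w, _ = img.shape
    ph, pw = h // grid, w // grid
    return [img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] for i, j in top]

# Toy usage: one bright (haze-like) corner in an otherwise dark image
img = np.zeros((64, 64, 3), dtype=np.float32)
img[0:16, 0:16] = 0.9
patches = select_hazy_patches(img, k=4, grid=4)
```

Because scoring is done per cell, the selected patches are non-uniform in content: they track wherever residual haze concentrates rather than a fixed spatial layout.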