Abstract

Classification networks suffer a substantial decline in performance when applied to degraded images caused by factors such as blur, noise, and low resolution. Existing methods focus on a specific kind of degradation and thus cannot adapt to multiple degradation scenarios simultaneously. Moreover, insufficient attention has been paid to the causes of this decline. In this paper, we analyze the reasons for the performance drop and propose an Explicitly and Implicitly feature-aligned Generative Adversarial Network (EIGAN), which guides the model to learn features that are more consistent with those of high-quality (HQ) images. First, we introduce a feature matching loss that encourages the model to focus on the same target regions for degraded images as it does for HQ images. Second, we propose an adversarial loss that steers the model toward aligning its feature distribution with that observed for HQ images. As a result, our method improves the classification accuracy on degraded images without introducing additional parameters. Extensive experiments on four types of degraded datasets show that the advantage of our method over existing approaches becomes notably more pronounced as the degradation intensifies.
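To make the two alignment terms concrete, the sketch below shows one plausible way to combine an explicit feature-matching loss with an implicit adversarial loss between degraded-image and HQ-image features. It is an illustrative sketch in PyTorch, not the authors' implementation: the feature discriminator, the use of L1 feature matching, and the loss weights are assumptions introduced here for clarity.

```python
# Illustrative sketch (not the paper's code): explicit feature matching plus
# implicit adversarial alignment between degraded (LQ) and high-quality (HQ)
# image features. Discriminator design and loss weights are assumptions.
import torch
import torch.nn as nn


class FeatureDiscriminator(nn.Module):
    """Hypothetical discriminator that judges whether a feature map comes
    from an HQ image or a degraded one."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, feat):
        return self.net(feat)  # one logit per image: "HQ" vs. "degraded"


def alignment_losses(feat_lq, feat_hq, disc, lambda_fm=1.0, lambda_adv=0.1):
    """Alignment terms for the classifier (generator side).

    feat_lq: features extracted from the degraded image.
    feat_hq: features extracted from the paired HQ image (detached, since
             the HQ branch serves only as a fixed target here).
    """
    bce = nn.BCEWithLogitsLoss()
    # Explicit alignment: match degraded features to HQ features directly.
    loss_fm = nn.functional.l1_loss(feat_lq, feat_hq.detach())
    # Implicit alignment: encourage the discriminator to label degraded
    # features as HQ, pulling the two feature distributions together.
    real_label = torch.ones(feat_lq.size(0), 1, device=feat_lq.device)
    loss_adv = bce(disc(feat_lq), real_label)
    return lambda_fm * loss_fm + lambda_adv * loss_adv
```

In a full adversarial setup, the discriminator would be trained in alternation to distinguish HQ from degraded features, while the classifier is updated with the combined loss above in addition to its usual cross-entropy objective.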
