This paper addresses the shortcomings of general-purpose image quality metrics when applied to facial images. Conventional metrics neither accurately reflect the unique attributes of faces nor align with human visual perception. To address these issues, we introduce a novel metric designed specifically for faces, built on a learning-based adversarial framework comprising a generator that simulates face restoration and a discriminator that performs quality assessment. Drawing inspiration from facial neuroscience studies, our metric emphasizes primary facial features, acknowledging that even minor changes to the eyes, nose, and mouth can significantly impact perception. Another key limitation of existing metrics is that they produce only a single image-level score, offering no insight into how different regions of the image contribute to the overall assessment. Our proposed metric is interpretable, revealing how each region of the image is evaluated. Comprehensive experiments confirm that our face-specific metric surpasses traditional general-purpose image quality assessment metrics on facial images, including both full-reference and no-reference methods. The code and models are available at https://github.com/AIM-SKKU/IFQA.
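To make the region-weighted, interpretable evaluation concrete, below is a minimal PyTorch sketch of the general idea: a fully convolutional discriminator produces a per-pixel quality map, which is then aggregated into a single score with extra weight on primary facial regions. The toy architecture, the `facial_mask` input, and the `alpha` weight are illustrative assumptions for exposition, not the paper's actual implementation (see the repository above for that).

```python
import torch
import torch.nn as nn

class QualityDiscriminator(nn.Module):
    """Toy fully convolutional discriminator that outputs a per-pixel
    quality map in [0, 1] for a face image. The architecture is
    illustrative, not the authors' actual network."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Sigmoid(),  # per-pixel quality in [0, 1]
        )

    def forward(self, x):
        return self.net(x)  # (B, 1, H, W) interpretable quality map

def face_quality_score(quality_map, facial_mask, alpha=0.7):
    """Aggregate a pixel-wise quality map into one score, weighting
    primary facial regions (eyes, nose, mouth) more heavily.
    `facial_mask` is a binary (B, 1, H, W) mask of those regions;
    `alpha` is an assumed region weight, not a published constant."""
    face_score = (quality_map * facial_mask).sum() / facial_mask.sum().clamp(min=1)
    rest_mask = 1 - facial_mask
    rest_score = (quality_map * rest_mask).sum() / rest_mask.sum().clamp(min=1)
    return alpha * face_score + (1 - alpha) * rest_score

# Usage: score a batch of face crops with a (hypothetical) region mask.
model = QualityDiscriminator()
images = torch.rand(2, 3, 128, 128)                 # stand-in face images
mask = (torch.rand(2, 1, 128, 128) > 0.8).float()   # stand-in eyes/nose/mouth mask
score = face_quality_score(model(images), mask)
print(score.item())
```

The per-pixel map is what makes the metric interpretable: visualizing it shows which regions drive the final score, rather than reporting one opaque image-level number.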