Abstract
Morphing attacks are a significant security threat for automatic face recognition systems. High-quality morphed images, i.e., images without significant visual artifacts such as ghosting, noise, and blurring, have a higher chance of success, being able to fool both human examiners and commercial face verification algorithms. Therefore, the availability of large sets of high-quality morphs is fundamental for training and testing robust morphing attack detection algorithms. However, producing a high-quality morphed image is an expensive and time-consuming task, since manual post-processing is generally required to remove the typical artifacts generated by landmark-based morphing techniques. This work describes an approach based on the Conditional Generative Adversarial Network paradigm for automated morphing artifact retouching, using Attention Maps to guide the generation process and limit the retouching to specific areas. In order to work with high-resolution images, the framework is applied to different facial crops, which, once processed and retouched, are accurately blended to reconstruct the whole morphed face. Specifically, we focus on four square face regions, i.e., the right eye, the left eye, the nose, and the mouth, which are frequently affected by artifacts. Several qualitative and quantitative experimental evaluations have been conducted to confirm the effectiveness of the proposal in terms of, among others, pixel-wise metrics, identity preservation, and human observer analysis. The results confirm the feasibility and accuracy of the proposed framework.
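The crop-retouch-blend pipeline described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption made for illustration, not the authors' implementation: the region boxes, the feathering margin, and the retouch_generator callable, which stands in for the attention-guided conditional GAN generator applied to each crop.

# Illustrative sketch only; region boxes and retouch_generator are assumptions.
import numpy as np

# Hypothetical square crops (top, left, size) around the four regions the paper
# targets: right eye, left eye, nose, mouth. In practice these would be derived
# from facial landmarks on the high-resolution morphed image.
REGION_BOXES = {
    "right_eye": (180, 260, 128),
    "left_eye":  (180, 420, 128),
    "nose":      (300, 340, 128),
    "mouth":     (430, 320, 160),
}

def feathered_mask(size, margin=16):
    # Soft square mask that fades to zero near the borders, so a retouched
    # crop blends smoothly into the surrounding face.
    ramp = np.minimum(np.arange(size), np.arange(size)[::-1])
    ramp = np.clip(ramp / margin, 0.0, 1.0)
    return np.minimum.outer(ramp, ramp)[..., None]  # shape (size, size, 1)

def retouch_face(morphed, retouch_generator):
    # morphed: HxWx3 float array in [0, 1].
    # retouch_generator: assumed callable mapping a crop to a retouched crop
    # of the same shape (placeholder for the conditional GAN generator).
    output = morphed.copy()
    for top, left, size in REGION_BOXES.values():
        crop = morphed[top:top + size, left:left + size]
        retouched = retouch_generator(crop)          # per-region retouching
        mask = feathered_mask(size)
        output[top:top + size, left:left + size] = (
            mask * retouched + (1.0 - mask) * crop   # blend back into the face
        )
    return output

The feathered blending stands in for the accurate blending step mentioned in the abstract; the actual method may differ.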
Highlights
The results of public evaluation campaigns [1] confirm that Face Recognition Systems (FRSs) are able to achieve impressive levels of accuracy, especially when operating in controlled scenarios
Several recent studies confirm that digital image manipulations can severely affect FRS performance: this is especially true for the so-called face morphing attack [2], where face images of two individuals, usually referred to as the criminal and the accomplice, are mixed to produce a new image containing facial features that belong to both subjects
The research community is devoting significant efforts to the development of Morphing Attack Detection (MAD) algorithms [4], able to discriminate between bona fide images and images generated by a morphing process
Summary
The results of public evaluation campaigns [1] confirm that Face Recognition Systems (FRSs) are able to achieve impressive levels of accuracy, especially when operating in controlled scenarios. Several recent studies confirm that digital image manipulations can severely affect FRS performance: this is especially true for the so-called face morphing attack [2], where face images of two individuals, usually referred to as the criminal and the accomplice, are mixed to produce a new image (morphed face) containing facial features that belong to both subjects. The availability of a number of free and commercial software tools for face morphing generation makes the risk even more serious. For this reason, the research community is devoting significant efforts to the development of Morphing Attack Detection (MAD) algorithms [4], able to discriminate between bona fide (not manipulated) images and images generated by a morphing process.