Abstract

Face morphing is a technique that combines facial images of two (or more) subjects such that the result resembles both subjects. In a morphing attack, this is exploited by, e.g., applying for a passport with the morphed image. Both subjects who contributed to the morphed image can then travel using this passport. Many state-of-the-art face recognition systems are vulnerable to morphing attacks. Morphing attack detection (MAD) methods are developed to mitigate this threat. MAD methods published in the literature are often trained on a limited number of datasets, or even a single dataset, in which all morphed faces are created using the same procedure. The resulting MAD methods work well for these specific datasets, with reported detection rates of over 99%, but their performance collapses for face morphs created using other procedures. Often, even simple image manipulations, like adding noise or smoothing, cause a serious degradation in the performance of MAD methods. In addition, more advanced tools exist to manipulate face morphs, such as manual retouching, and morphing artifacts can be concealed by printing and scanning a photograph (as used in the passport application process in many countries). Furthermore, datasets for training and testing MAD methods are often created by morphing images of arbitrary subjects, including even male-female morphs and morphs between subjects with different skin color. Although this may result in a large number of morphed faces, the created morphs are often not convincing and certainly do not represent a best-effort attack by a criminal. A far more realistic attack would include careful selection of subjects that look alike and the creation of high-quality morphs from images of these subjects using careful (manual) post-processing. In this chapter, we therefore argue that robust evaluation of MAD methods requires datasets with morphed images created using a large number of different morphing methods, including various ways to conceal the morphing artifacts (e.g., adding noise, smoothing, printing and scanning), various ways of pre- and post-processing, careful selection of the subjects, and multiple facial datasets. We also show the sensitivity of various MAD methods to the mentioned variations and the effect of training MAD methods on multiple datasets.
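As a rough illustration of the ideas above, the following sketch shows a naive pixel-level morph of two pre-aligned face images, together with two of the simple manipulations mentioned (additive Gaussian noise and smoothing) that can conceal morphing artifacts. This is a minimal sketch under strong assumptions, not the morphing pipeline evaluated in this chapter: real morphing pipelines warp facial landmarks before blending, and the file names and parameter values used here are purely illustrative.

    import cv2
    import numpy as np

    def naive_morph(img_a, img_b, alpha=0.5):
        """Alpha-blend two pre-aligned face images of equal size."""
        return cv2.addWeighted(img_a, alpha, img_b, 1.0 - alpha, 0.0)

    def add_gaussian_noise(img, sigma=5.0):
        """Additive Gaussian noise, one of the simple manipulations that degrade MAD."""
        noisy = img.astype(np.float64) + np.random.normal(0.0, sigma, img.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

    def smooth(img, ksize=5):
        """Gaussian smoothing, another simple way to hide morphing artifacts."""
        return cv2.GaussianBlur(img, (ksize, ksize), 0)

    # Hypothetical input files: aligned face crops of the two subjects.
    subject_a = cv2.imread("subject_a.png")
    subject_b = cv2.imread("subject_b.png")
    morph = naive_morph(subject_a, subject_b)
    cv2.imwrite("morph_disguised.png", smooth(add_gaussian_noise(morph)))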

Highlights

  • A morphed face image is a combination of two or more face images, created in a way that all contributing subjects are verified successfully against the morphed image

  • In order to demonstrate the impact of a number of the described factors on the performance of the Local Binary Patterns (LBP)/Support Vector Machine (SVM) morphing attack detector, we present experiments on within dataset performance, cross dataset performance, mixed dataset performance, robustness against additive Gaussian noise, robustness against scaling, and selection of similar subjects

  • We noticed that often morphing attack detection methods are developed and tested using a single dataset with morphed face images


Summary

16.1 Introduction

A morphed face image is a combination of two or more face images, created in a way that all contributing subjects are verified successfully against the morphed image. The two images are combined to create the attack sample M; see Fig. 16.1c. Many of the published methods for face morphing attack detection are developed and tested using a single dataset with morphed and bona fide samples, and often good detection results are reported. An example is morphing attack detection based on so-called double JPEG compression detection: detection of artifacts that occur because the morphed images are created from JPEG compressed images and compressed again when they are stored. Such a method will fail to detect morphed images if they are stored uncompressed. The aim of this chapter is to demonstrate the evaluation of morphing attack detection methods using single dataset and cross dataset testing, and their sensitivity to several simple morphing disguise techniques. It is based on research at the University of Twente, the Netherlands, published in [18, 19].
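To make the texture-based detection approach of Sect. 16.4 concrete, below is a minimal sketch of an LBP/SVM morphing attack detector. It is an illustrative sketch only: it assumes pre-cropped grayscale face images of equal size, and the LBP parameters and SVM settings are example choices, not those used in the chapter's experiments.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(gray_face, radius=1, n_points=8):
        """Uniform LBP codes pooled into a normalized histogram feature vector."""
        codes = local_binary_pattern(gray_face, n_points, radius, method="uniform")
        n_bins = n_points + 2  # uniform patterns plus one non-uniform bin
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        return hist / hist.sum()

    def train_detector(faces, labels):
        """Fit an SVM on LBP histograms; labels: 1 = morphed, 0 = bona fide."""
        features = np.array([lbp_histogram(face) for face in faces])
        clf = SVC(kernel="rbf", probability=True)
        clf.fit(features, labels)
        return clf

    def morph_score(clf, face):
        """Estimated probability that a face image is a morph."""
        return clf.predict_proba(lbp_histogram(face).reshape(1, -1))[0, 1]

Such a detector is trained and evaluated per dataset, which is precisely the setting in which the within dataset, cross dataset, and mixed dataset experiments of Sect. 16.6 differ.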

16.2 Related Work
16.3.1 Creating Morphs
16.3.2 Datasets
16.4 Texture-Based Face Morphing Attack Detection
16.5 Morphing Disguising
16.6 Experiments and Results
16.6.1 Within Dataset Performance
16.6.2 Cross Dataset Performance
16.6.3 Mixed Dataset Performance
16.6.4 Robustness Against Additive Gaussian Noise
16.6.5 Robustness Against Scaling
16.6.6 Selection of Similar Subjects
16.7 The SOTAMD Benchmark
16.8 Conclusion
