Abstract

Facial recognition has been shown to have different accuracy for different demographic groups. When a threshold is set to achieve a specific False Match Rate (FMR) on a mixed-demographic impostor distribution, some demographic groups can experience a significantly worse FMR. To mitigate this, some authors have proposed demographic-specific thresholds. However, this can be impractical in an operational scenario, as it would require either that users report their demographic group or that the system predict the demographic group of each user. Both options can be deemed controversial, since demographic group membership is a sensitive attribute. Further, this approach requires enumerating the possible demographic groups, which can be controversial in itself. We show that a similar mitigation effect can be achieved using non-sensitive predicted soft-biometric attributes. These attributes are based on the appearance of the users (such as hairstyle, accessories, and facial geometry) rather than on how the users self-identify. Our experiments use a set of 38 binary non-sensitive attributes from the MAAD-Face dataset. We report results on the Balanced Faces in the Wild dataset, which has a balanced number of identities by race and gender. We compare clustering-based and decision-tree-based strategies for selecting thresholds. We show that the proposed strategies can reduce differential outcomes in intersectional groups twice as effectively as using gender-specific thresholds and, in some cases, are also better than using race-specific thresholds.
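
As a concrete illustration of the threshold-setting problem described in the abstract, the following minimal sketch selects a similarity threshold that achieves a target FMR on a mixed impostor distribution and then measures the FMR each group actually experiences at that threshold. The synthetic score distributions, the target FMR of 0.1%, and the function names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def threshold_for_fmr(impostor_scores, target_fmr):
    """Similarity threshold whose false match rate equals target_fmr.

    Higher scores mean "more similar", so the FMR at threshold t is the
    fraction of impostor scores >= t; the (1 - target_fmr) quantile of the
    impostor scores is therefore the threshold that hits the target.
    """
    return np.quantile(impostor_scores, 1.0 - target_fmr)

def fmr_at(impostor_scores, threshold):
    """Observed false match rate of a fixed threshold on one impostor set."""
    return float(np.mean(np.asarray(impostor_scores) >= threshold))

rng = np.random.default_rng(0)
# Hypothetical impostor similarity scores for two groups whose distributions
# differ slightly; real scores would come from non-mated face comparisons.
scores = {
    "group_a": rng.normal(0.30, 0.10, 200_000),
    "group_b": rng.normal(0.36, 0.10, 200_000),
}

# Global threshold: calibrated on the mixed impostor distribution.
mixed = np.concatenate(list(scores.values()))
t_global = threshold_for_fmr(mixed, target_fmr=1e-3)
print(f"global threshold {t_global:.3f}, mixed FMR {fmr_at(mixed, t_global):.4%}")

# The same threshold induces unequal FMRs across the two groups ...
for name, s in scores.items():
    print(f"  {name}: FMR at global threshold {fmr_at(s, t_global):.4%}")

# ... whereas group-specific thresholds restore the target FMR per group.
for name, s in scores.items():
    t_g = threshold_for_fmr(s, target_fmr=1e-3)
    print(f"  {name}: own threshold {t_g:.3f}, FMR {fmr_at(s, t_g):.4%}")
```

The calibration step is identical in both cases; the only difference is which impostor distribution the quantile is computed on, which is why per-group (or, as proposed here, per-attribute-group) thresholds can equalise FMRs without changing the recognition model.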

Highlights

  • Recent studies have pointed to potential demographic biases in facial analysis [1]–[4] and facial recognition [1], [5]–[8].

  • In the related work, we review the state of the art in two fields: the effects of soft-biometric attributes on facial verification (FV) and efforts to achieve fairness in facial recognition.

  • In our work, the global threshold serves as the baseline in the reported results, since it is the standard approach in Facial Verification (FV); see the sketch after this list.
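
Complementing the global-threshold baseline mentioned in the last highlight, the sketch below illustrates one way the clustering-based strategy from the abstract could work: subjects are grouped by their predicted binary soft-biometric attribute vectors and one threshold is calibrated per cluster. The use of k-means, the number of clusters, the synthetic data, and the simplification of attributing one impostor score to each subject are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Illustrative stand-in data: 38 predicted binary soft-biometric attributes
# per subject (as in MAAD-Face) and one impostor similarity score per subject.
n_subjects = 5_000
attributes = rng.integers(0, 2, size=(n_subjects, 38))     # 0/1 attribute vectors
impostor_scores = rng.normal(0.32, 0.10, size=n_subjects)  # hypothetical scores

# Step 1: cluster subjects by appearance-based attributes (no sensitive labels).
n_clusters = 4  # assumed for illustration, not tuned
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(attributes)
clusters = kmeans.labels_

# Step 2: calibrate one threshold per cluster so that each cluster's impostor
# distribution meets the target FMR.
target_fmr = 1e-3
thresholds = {
    c: np.quantile(impostor_scores[clusters == c], 1.0 - target_fmr)
    for c in range(n_clusters)
}

# Step 3: at verification time, look up the probe's attribute cluster and
# apply that cluster's threshold instead of a single global one.
def decide(probe_attributes, similarity):
    """Accept if the similarity clears the threshold of the probe's cluster."""
    cluster = int(kmeans.predict(np.asarray(probe_attributes).reshape(1, -1))[0])
    return similarity >= thresholds[cluster]

# Example usage with a hypothetical probe.
probe = rng.integers(0, 2, size=38)
print(decide(probe, similarity=0.62))
```

A decision-tree-based variant would replace the k-means step with a tree that partitions subjects by attribute values, but the per-leaf threshold calibration would follow the same quantile logic.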


Summary

Introduction

Recent studies have pointed to potential demographic biases in facial analysis [1]–[4] and facial recognition [1], [5]–[8]. In 2020, the Association for Computing Machinery (ACM) called for a suspension of facial recognition technologies as they produce “(...) results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems” [9]. The central concern is typically that different demographic groups experience different false match rates. This has become a concern in Facial Verification (FV), which consists of validating a person’s identity by comparing their captured biometric information with a biometric template stored in the system database [44]. False matches are of particular concern because they can lead to unnecessary encounters with law enforcement.
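
For readers unfamiliar with the verification step just described, the following minimal sketch shows a typical decision rule, assuming the probe image and the enrolled template have already been mapped to embedding vectors by some face encoder; the cosine-similarity choice and the function names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_embedding, enrolled_template, threshold):
    """Accept the identity claim if the probe matches the enrolled template.

    A false match occurs when an impostor's probe still clears the threshold,
    which is why the threshold (and whose impostor distribution it is
    calibrated on) drives the error rates discussed in this paper.
    """
    score = cosine_similarity(probe_embedding, enrolled_template)
    return score >= threshold, score
```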

