Abstract
User authentication on smartphones is key to many applications and must satisfy both security and convenience. We propose a multi-modal face authentication system that pushes the limit of state-of-the-art image-based face recognition solutions by incorporating a new sensing modality — acoustics. It actively emits nearly inaudible acoustic signals from the earpiece speaker to "illuminate" the user's face and extracts features from the echoes using a customized convolutional neural network; these features are fused with visual features extracted from state-of-the-art face recognition models for secure face authentication. Because the echo features depend on 3D facial geometry and materials, our multi-modal design is not easily spoofed by images or videos, unlike image-based face recognition systems. It does not require any special sensors, thus eliminating the extra cost of solutions like FaceID. Experiments show that our design achieves face recognition performance comparable to state-of-the-art image-based face authentication while blocking image/video spoofing.
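The fusion step described above — combining an acoustic echo embedding with a visual face embedding into a single representation used for an accept/reject decision — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`fuse_features`, `authenticate`), the concatenation-based fusion, and the cosine-similarity threshold are all assumptions made for illustration.

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so both modalities contribute equally."""
    return v / np.linalg.norm(v)

def fuse_features(visual_emb: np.ndarray, echo_emb: np.ndarray) -> np.ndarray:
    """Fuse the two modalities by concatenating their normalized embeddings.

    visual_emb: embedding from an image-based face recognition model.
    echo_emb:   embedding from the CNN applied to the acoustic echoes.
    (Simple concatenation is an assumption; a learned fusion layer could
    be used instead.)
    """
    return np.concatenate([l2_normalize(visual_emb), l2_normalize(echo_emb)])

def authenticate(probe: np.ndarray, template: np.ndarray,
                 threshold: float = 0.8) -> tuple[bool, float]:
    """Accept the probe if its cosine similarity to the enrolled fused
    template exceeds a threshold (threshold value is illustrative)."""
    sim = float(np.dot(l2_normalize(probe), l2_normalize(template)))
    return sim >= threshold, sim
```

An image- or video-based spoof could reproduce the visual embedding but not the echo embedding, since the echoes depend on the 3D geometry and materials of a real face, so the fused similarity would fall below the threshold.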