Abstract
Recent research in biometric technologies underscores the benefits of multimodal systems that use multiple traits to enhance security by complicating the replication of samples from genuine users. To address this, we present a bimodal deep learning network (BDLN or BNet) that integrates facial and voice modalities. Voice features are extracted using the SincNet architecture, and facial image features are obtained from convolutional layers. The proposed network fuses these feature vectors using either averaging or concatenation. A densely connected layer then processes the combined vector to produce a dual-modal vector that encapsulates distinctive user features. This dual-modal vector, passed through another densely connected layer followed by a softmax activation function, is used for identification. The presented system achieved an identification accuracy of 99% and a low equal error rate (EER) of 0.13% for verification. These results, derived from the VidTIMIT and BIOMEX-DB datasets, highlight the effectiveness of the proposed bimodal approach in improving biometric security.
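The fusion-and-identification step described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: feature dimensions, the number of enrolled users, and the random weights are all hypothetical, and the upstream SincNet and convolutional extractors are stood in for by random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(face_vec, voice_vec, method="concat"):
    """Combine the two modality embeddings by averaging or concatenation."""
    if method == "avg":
        # averaging requires both embeddings to share a dimension
        return (face_vec + voice_vec) / 2.0
    return np.concatenate([face_vec, voice_vec])

def softmax(z):
    """Numerically stable softmax over identity logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

# stand-ins for the SincNet (voice) and CNN (face) embeddings; 128 dims is an assumption
face = rng.standard_normal(128)
voice = rng.standard_normal(128)

fused = fuse(face, voice, method="concat")          # 256-dim dual-modal vector
W = rng.standard_normal((10, fused.size)) * 0.01    # dense layer for 10 hypothetical users
probs = softmax(W @ fused)                          # per-user identification probabilities
pred = int(np.argmax(probs))                        # predicted user index
```

In practice the dense layers would be trained end-to-end with the feature extractors; this sketch only shows how the two fusion choices change the dimensionality handed to the identification head.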
Journal of Telecommunications and Information Technology