Abstract

This paper first compares the performance of two authentication methods using ear images, in which feature vectors are extracted by either principal component analysis (PCA) or independent component analysis (ICA). Next, the effectiveness of combining the PCA- and ICA-based ear authentication methods is investigated. In our previous work, we proposed an audio-visual person authentication method using speech and ear images, with the aim of increasing noise robustness in mobile environments. In this paper, we apply the best ear authentication method to our audio-visual authentication method and examine its robustness. Experiments were conducted on an audio-visual database collected from 36 male speakers in five sessions over half a year. The speech data were contaminated with white noise under various SNR conditions. The experimental results show that: (1) PCA outperforms ICA in the GMM-based ear authentication framework; (2) the fusion of PCA- and ICA-based ear authentication is effective; and (3) combining this fusion method for ear images with the speech-based method further improves person authentication performance. The audio-visual person authentication method achieves better performance than either the ear-based or the speech-based method alone in the SNR range between 15 and 30 dB.
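To make the pipeline described above concrete, the sketch below illustrates one plausible reading of it: vectorized ear images are projected by PCA or ICA, each client is modeled with a GMM over the projected features, and the PCA and ICA streams are fused at the score level by a weighted sum. This is not the authors' implementation; the feature dimensionality, number of mixtures, diagonal covariances, and fusion weight are all illustrative assumptions.

```python
# Hedged sketch of PCA/ICA ear features + per-client GMMs + score-level fusion.
# All hyperparameters (n_features, n_mixtures, w_pca) are assumptions, not
# values from the paper.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.mixture import GaussianMixture


def train_client_model(ear_images, n_features=40, n_mixtures=4, method="pca"):
    """ear_images: (n_samples, n_pixels) matrix of vectorized ear images."""
    proj = (PCA(n_components=n_features) if method == "pca"
            else FastICA(n_components=n_features))
    feats = proj.fit_transform(ear_images)
    gmm = GaussianMixture(n_components=n_mixtures,
                          covariance_type="diag").fit(feats)
    return proj, gmm


def stream_score(proj, gmm, test_images):
    """Average log-likelihood of test images under one client's GMM."""
    return gmm.score_samples(proj.transform(test_images)).mean()


def fused_score(pca_model, ica_model, test_images, w_pca=0.5):
    """Weighted-sum fusion of the PCA- and ICA-stream scores."""
    s_pca = stream_score(*pca_model, test_images)
    s_ica = stream_score(*ica_model, test_images)
    return w_pca * s_pca + (1.0 - w_pca) * s_ica
```

In the same spirit, the audio-visual combination reported in the paper could be realized by adding the speech-stream log-likelihood to this fused ear score with another weight, but the actual fusion rule and weights used by the authors are not specified in this abstract.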

