Abstract

A multibiometric system employs two or more behavioral or physical traits of a person for verification and identification. Many studies have shown that multibiometric systems can improve the performance of single-biometric systems. In this study, three fusion levels are evaluated: feature level fusion, score level fusion, and decision level fusion. Feature level fusion concatenates the features of the two modalities before pattern matching, while score level fusion computes the mean of the matching scores produced by the two modalities after pattern matching. For decision level fusion, logical AND and OR operations are applied to the final decisions of the two modalities. The speech signal is used as the primary biometric trait for the verification system, while lipreading images serve as a second modality to support the single-modal system. Mel Frequency Cepstral Coefficients (MFCC) are used as speech features, and the region of interest (ROI) of the lipreading images is used as visual features. A support vector machine (SVM) is used as the classifier. The performance of each fusion level is compared using the accuracy percentage and the Receiver Operating Characteristic (ROC) curve, obtained by plotting the Genuine Acceptance Rate (GAR) against the False Acceptance Rate (FAR). Experimental results show that score level fusion performs best, followed by feature level fusion and then decision level fusion. With 20 training samples, the observed accuracies are 99.9488%, 99.7534%, and 99.6639% for score level fusion, feature level fusion, and decision level fusion, respectively.
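
The three fusion levels described above can be illustrated with a minimal Python sketch. This is not the authors' code; the array names, shapes, and the toy data are assumptions made purely for demonstration, with feature-level fusion feeding an SVM as the abstract indicates.

```python
# Illustrative sketch of the three fusion levels (not the paper's implementation).
import numpy as np
from sklearn.svm import SVC

def feature_level_fusion(mfcc_feats, lip_roi_feats):
    """Concatenate MFCC and lip-ROI feature vectors before pattern matching."""
    return np.concatenate([mfcc_feats, lip_roi_feats], axis=1)

def score_level_fusion(speech_scores, lip_scores):
    """Fuse by taking the mean of the two single-modal matching scores."""
    return (speech_scores + lip_scores) / 2.0

def decision_level_fusion(speech_decision, lip_decision, rule="AND"):
    """Combine the binary accept/reject decisions with logical AND or OR."""
    if rule == "AND":
        return speech_decision & lip_decision
    return speech_decision | lip_decision

# Toy usage: train an SVM on feature-level fused vectors (random placeholder data).
rng = np.random.default_rng(0)
X_audio = rng.normal(size=(40, 13))   # e.g. 13 MFCC coefficients per sample
X_visual = rng.normal(size=(40, 50))  # e.g. flattened lip-ROI features
y = rng.integers(0, 2, size=40)       # genuine (1) vs impostor (0) labels
clf = SVC().fit(feature_level_fusion(X_audio, X_visual), y)
```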

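The ROC evaluation mentioned in the abstract plots GAR against FAR over a range of decision thresholds. The following is a hedged sketch of how such points could be computed from genuine and impostor match scores; the score values and threshold grid are illustrative assumptions, not data from the study.

```python
# Sketch of GAR/FAR computation for an ROC curve (illustrative values only).
import numpy as np

def gar_far(genuine_scores, impostor_scores, threshold):
    """GAR = fraction of genuine scores accepted; FAR = fraction of impostor scores accepted."""
    gar = float(np.mean(genuine_scores >= threshold))
    far = float(np.mean(impostor_scores >= threshold))
    return gar, far

# Sweep thresholds to trace the ROC curve (GAR versus FAR).
genuine = np.array([0.90, 0.85, 0.80, 0.70])
impostor = np.array([0.60, 0.40, 0.30, 0.20])
roc_points = [gar_far(genuine, impostor, t) for t in np.linspace(0.0, 1.0, 11)]
```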