Abstract

To improve the speaker recognition rate, we propose a speaker recognition model based on the fusion of different kinds of speech features. A new feature-aggregation methodology with a total of 18 features is proposed; it includes mel-frequency cepstral coefficient (MFCC), linear predictive coding (LPC), perceptual linear prediction (PLP), root mean square (RMS), centroid, and entropy features along with their delta (Δ) and delta-delta (ΔΔ) feature vectors. The proposed approach is tested on five speech datasets of different sizes, namely the NIST-2008, VoxForge, ELSDSR, VCTK, and VoxCeleb1 speech corpora. The results are evaluated using the MATLAB Classification Learner application with the linear discriminant (LD), K-nearest neighbor (KNN), and ensemble classifiers. For the NIST-2008 and VoxForge datasets, the best speaker identification (SI) accuracies of 96.9% and 100% and the lowest speaker verification (SV) equal error rate (EER) values of 0.2% and 0% are achieved with the LD and KNN classifiers, respectively. For the VCTK and ELSDSR datasets, the best SI accuracy of 100% and the lowest SV EER of 0% are achieved with all three classifiers using different feature-level fusion approaches, while the highest SI accuracy and lowest EER achieved on the VoxCeleb1 database are 90% and 4.07%, respectively, using the KNN classifier. The experimental results show that fusing different features with their delta and delta-delta values increases SI accuracy by 10-50% and reduces the SV EER compared with the values obtained with a single feature.
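The delta and delta-delta augmentation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the standard regression formula for delta features, Δc_t = Σₙ n(c_{t+n} − c_{t−n}) / (2 Σₙ n²), and uses a random matrix of six per-frame values as a stand-in for the actual MFCC/LPC/PLP/RMS/centroid/entropy features. Concatenating the static, delta, and delta-delta streams yields the 18-dimensional fused vector per frame.

```python
import numpy as np

def delta(features, N=2):
    """Regression-based delta over the time axis (frames x dims).

    Implements d_t = sum_{n=1..N} n * (c_{t+n} - c_{t-n}) / (2 * sum n^2),
    with edge frames repeated so the output has the same number of frames.
    """
    T = features.shape[0]
    denom = 2 * sum(n * n for n in range(1, N + 1))
    padded = np.pad(features, ((N, N), (0, 0)), mode="edge")
    out = np.zeros(features.shape, dtype=float)
    for n in range(1, N + 1):
        out += n * (padded[N + n : N + n + T] - padded[N - n : N - n + T])
    return out / denom

def fuse(static):
    """Concatenate static features with their delta and delta-delta streams."""
    d = delta(static)
    dd = delta(d)
    return np.concatenate([static, d, dd], axis=1)

# Toy "static" features: 6 values per frame, standing in for the
# MFCC/LPC/PLP/RMS/centroid/entropy values from the abstract.
rng = np.random.default_rng(0)
static = rng.standard_normal((100, 6))
fused = fuse(static)
print(fused.shape)  # (100, 18): 6 static + 6 delta + 6 delta-delta
```

The fused matrix would then be fed, frame by frame or pooled per utterance, to a classifier such as LD or KNN for identification or verification.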
