Abstract

Speech and speaker recognition systems aim to analyze the parametric information contained in the human voice and to recognize speakers at the highest possible rate. One of the most important cues in the audio signal for successful speaker recognition is the speaker's accent. Speaker accent recognition systems are based on the analysis of patterns such as the way a speaker talks and the word choices they make while speaking. In this study, data obtained with the Mel-Frequency Cepstral Coefficients (MFCC) feature extraction technique from the voice signals of 367 speakers with 7 different accents were used. The data of 330 speakers were taken from the "Speaker Accent Recognition" data set in the UC Irvine Machine Learning (ML) Repository. The data of the remaining 37 speakers were obtained by applying the same MFCC feature extraction technique to voice recordings from the "Speech Accent Archive" created by George Mason University. Nine ML classification algorithms were used for the designed speaker accent recognition system. In addition, the k-fold cross-validation technique was used to evaluate the classifiers on held-out data, showing how the ML algorithms perform when the data set is partitioned into k folds. Information about the classification algorithms used in the designed system and the hyperparameter optimization performed for each of them is also provided. The performance of the classification algorithms is reported using standard performance metrics.
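As a concrete illustration of the feature-extraction step described above, the sketch below shows how an MFCC vector could be computed from a single recording. It is a minimal sketch only: the abstract does not state which toolkit was used, so the librosa library, the n_mfcc=12 coefficient count, and the per-recording mean pooling are assumptions for illustration, not the authors' method.

    import numpy as np
    import librosa  # assumed toolkit; the paper does not name one

    def extract_mfcc_features(path, n_mfcc=12):
        # Load the recording at its native sampling rate.
        signal, sr = librosa.load(path, sr=None)
        # Compute MFCCs frame by frame; n_mfcc=12 is an assumed value.
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
        # Mean-pool over frames so each recording yields one
        # fixed-length feature vector, as the classifiers expect.
        return np.mean(mfcc, axis=1)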
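The evaluation protocol (several classifiers compared under k-fold cross-validation, with hyperparameter optimization) could look like the following sketch. The choice of scikit-learn, the three classifiers shown, the value k=10, and the SVM parameter grid are all illustrative assumptions: the abstract names neither the nine algorithms nor the value of k, and the data arrays here are random placeholders standing in for the real MFCC features.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score, GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    # Placeholders for the real data: 367 speakers, 7 accent labels;
    # the feature dimension of 12 is an assumption.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(367, 12))
    y = rng.integers(0, 7, size=367)

    # Three stand-in classifiers; the paper evaluates nine.
    classifiers = {
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf"),
        "Random Forest": RandomForestClassifier(n_estimators=100),
    }

    # k-fold cross-validation; k=10 is an assumed value.
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
        print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

    # Hyperparameter optimization for one classifier via grid search,
    # evaluated with the same folds; the grid itself is illustrative.
    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=cv)
    grid.fit(X, y)
    print("best SVM parameters:", grid.best_params_)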
