Abstract
Accent similarity evaluation and accent identification are complex and challenging tasks for many applications because of the wide variety of native and non-native language accents in the world. The absence of prior studies on native versus non-native English accent similarity evaluation, together with the limitations of individual feature-extraction techniques for accent classification, led us to propose a new model, termed the intra-native accent feature sharing based native accent identification (NAI) framework, built on an English accent archive speech dataset. The NAI network was employed for non-native English accent classification, native English accent classification, and identification of native versus non-native English accents. Finally, the similarity of non-native English accents to native English accents was evaluated using a pre-trained NAI model. Moreover, the proposed approach also serves as a training-data augmentation method, helping to meet the large training-data demands of deep learning. The baseline for our work was ordinary individual voice feature extraction with data augmentation and regularization techniques. The proposed approach improved the accuracy of the baseline by 3.7%–7.5% on average across several strong deep learning algorithms. A Quade test comparison of the methods yielded a significance level (p-value) of 0.01, showing that the proposed approach significantly outperformed the baseline. The model ranks non-native English accents by their similarity to native English accents; in order of proximity, they are Mandarin, Italian, German, French, Amharic, and Hindi.
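The final ranking step can be illustrated with a minimal sketch. Assuming the pre-trained NAI model assigns each non-native utterance a similarity score to native English (the per-utterance scores below are placeholder values invented for illustration, not results from the paper), accents can be ranked by their mean score:

```python
# Hypothetical sketch: rank non-native accents by mean similarity to
# native English. Real scores would come from the pre-trained NAI
# model; the values here are dummy placeholders for illustration.
from statistics import mean

# Placeholder per-utterance similarity scores (illustrative only).
scores = {
    "Mandarin": [0.82, 0.79, 0.85],
    "Italian":  [0.78, 0.80, 0.76],
    "German":   [0.74, 0.72, 0.77],
    "French":   [0.70, 0.73, 0.69],
    "Amharic":  [0.66, 0.64, 0.68],
    "Hindi":    [0.60, 0.63, 0.58],
}

# Sort accents by descending mean similarity to native English.
ranking = sorted(scores, key=lambda a: mean(scores[a]), reverse=True)
print(ranking)
```

With the placeholder values chosen above, the resulting order matches the proximity ranking reported in the abstract.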