Abstract

Speech signals serve as a primary input source in human–computer interaction (HCI) for several applications, such as automatic speech recognition (ASR), speech emotion recognition (SER), and gender and age recognition. Classifying speakers by age and gender is a challenging task in speech processing owing to the inability of current feature-extraction and classification methods to capture salient high-level speech features. To address these problems, we introduce a novel end-to-end convolutional neural network (CNN) with a specially designed multi-attention module (MAM) for age and gender recognition from speech signals. Our proposed model uses the MAM to effectively extract spatial and temporal salient features from the input data. The MAM uses rectangular-shaped filters as kernels in its convolution layers and comprises two separate attention mechanisms, one for time and one for frequency. The time attention branch learns to detect temporal cues, whereas the frequency attention branch extracts the features most relevant to the target by focusing on spatial frequency features. The two extracted spatial and temporal feature sets complement one another and yield high performance in age and gender classification. The proposed age and gender classification system was tested on the Common Voice dataset and a locally developed Korean speech recognition dataset. Our model achieved accuracy scores of 96%, 73%, and 76% for gender, age, and age-gender classification, respectively, on the Common Voice dataset. On the Korean speech recognition dataset, the results were 97%, 97%, and 90% for gender, age, and age-gender recognition, respectively. The prediction performance obtained in our experiments demonstrates the superiority and robustness of the proposed model on age, gender, and age-gender recognition from speech signals.
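The core MAM idea described above, two attention branches that separately re-weight a spectrogram along its time and frequency axes before the results are combined, can be sketched in a few lines of NumPy. This is a simplified, hypothetical illustration of the mechanism, not the authors' implementation: the function names, the mean-pooling summaries, and the additive fusion are assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_attention(spec):
    """Toy sketch of a two-branch attention over a (freq, time) spectrogram.

    One branch learns weights over time frames (temporal cues), the other
    over frequency bins (spectral cues); here the "learned" scores are
    stand-in mean-pooled summaries of the input.
    """
    freq_scores = spec.mean(axis=1)        # (freq,) summary across time
    time_scores = spec.mean(axis=0)        # (time,) summary across frequency
    freq_w = softmax(freq_scores)          # frequency attention weights
    time_w = softmax(time_scores)          # time attention weights
    freq_branch = spec * freq_w[:, None]   # emphasize salient frequency bins
    time_branch = spec * time_w[None, :]   # emphasize salient time frames
    return freq_branch + time_branch       # fuse the complementary branches

spec = np.random.rand(128, 64)             # e.g. 128 mel bins x 64 frames
out = multi_attention(spec)                # same shape as the input: (128, 64)
```

In a real CNN the attention scores would come from learned convolution layers (with the rectangular kernels the abstract mentions) rather than mean pooling, but the re-weight-then-combine structure is the same.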

Highlights

  • Human speech is one of the most widely used modes of human communication. A speech signal carries information about the content of speech as well as the speaker's emotions, age, gender, and identity

  • We developed and tested three different convolutional neural network (CNN) models for the age, gender, and age-gender classification problems to analyze the effectiveness of the multi-attention module (MAM) when placed in various parts of the CNN model

  • To the best of our knowledge, this is the first CNN model with a MAM mechanism proposed for age and gender classification using speech spectrograms

Introduction

A speech signal carries information about the content of speech as well as the speaker's emotions, age, gender, and identity. Speech signals serve as a primary input source for several applications, such as automatic speech recognition (ASR) [1], speech emotion recognition (SER) [2], gender recognition, and age estimation [3,4]. Automatically extracting a speaker's age, gender, and emotional state from speech signals has recently become an emerging field of study. Proper and efficient extraction of speaker identity from speech signals enables applications such as advertisements targeted by customer age and gender, and caller-agent pairing in call centers that assigns agents according to the caller's identity. Age recognition also helps systems operated by voice command adapt to the user and provide more natural human–machine interaction.
