Abstract
Convolutional neural networks (CNNs) have shown promise in virtual human speech emotion expression. Previous studies have applied CNNs to speech emotion recognition with good results, but research gaps remain in avatar speech emotion expression, particularly concerning speaker characteristics and the scarcity of available datasets. To address these issues, this paper collects and pre-processes speech data from multiple speakers, extracting features such as Mel Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding (LPC) coefficients, and the fundamental frequency (F0). A multi-channel CNN (MUC-CNN) model is designed to fuse the different feature streams, with model parameters updated using the Adam optimization algorithm. The model's performance is compared with classical methods such as Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbors (k-NN) to determine its applicability and to optimize its design and training process. Experimental evaluation shows that the MUC-CNN model outperforms these classical methods in recognizing and expressing emotions in virtual human speech. Incorporating MFCC, LPC, and F0 features improves the model's recognition capability, and the multi-channel architecture allows each feature type to be processed independently, enhancing the model's discriminative ability. Performance is also influenced by the number of convolutional layers and kernels used. These results highlight the effectiveness of the proposed MUC-CNN model for recognizing and expressing speech emotions in virtual human interaction. Future research can explore alternative feature information and refine the model architecture to further optimize performance. This technology has the potential to enhance user experience and interaction in fields such as speech interaction, virtual reality, games, and education.
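To make the multi-channel idea concrete, the following is a minimal sketch of how such a model could be structured in PyTorch. The layer sizes, kernel sizes, feature dimensions (13 MFCCs, 12 LPC coefficients), and the number of emotion classes are all illustrative assumptions, not the configuration reported in the paper; only the overall pattern (one convolutional branch per feature type, fusion by concatenation, training with Adam) follows the abstract.

```python
# Sketch of a multi-channel CNN for speech emotion recognition.
# All hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn

class FeatureBranch(nn.Module):
    """One convolutional channel that processes a single feature type."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # (batch, 64)

class MucCnn(nn.Module):
    """Independent branches for MFCC, LPC, and F0, fused by
    concatenation before a shared linear classifier."""
    def __init__(self, n_mfcc: int = 13, n_lpc: int = 12, num_classes: int = 6):
        super().__init__()
        self.mfcc_branch = FeatureBranch(n_mfcc)
        self.lpc_branch = FeatureBranch(n_lpc)
        self.f0_branch = FeatureBranch(1)  # F0 is a single pitch contour
        self.classifier = nn.Linear(64 * 3, num_classes)

    def forward(self, mfcc, lpc, f0):
        fused = torch.cat(
            [self.mfcc_branch(mfcc), self.lpc_branch(lpc), self.f0_branch(f0)],
            dim=1,
        )
        return self.classifier(fused)

# The abstract states that parameters are updated with Adam.
model = MucCnn()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

Each branch expects an input of shape (batch, feature_dim, time); keeping the branches separate until the fusion step is what lets each feature type be processed independently, as described above.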