Abstract

The popularity of the Internet has driven the rapid development of artificial intelligence, affective computing, the Internet of Things (IoT), and other technologies. In particular, the development of IoT provides more references for the realization of the smart home. However, once people have achieved a certain degree of material satisfaction, they increasingly want to communicate emotionally. Music carries rich emotional information: it is an important medium of communication between people and an effective way to convey emotion. It has therefore become one of the most convenient and natural modes of intelligent human-computer interaction. Traditional music emotion recognition methods suffer from drawbacks such as low recognition rates and high time consumption. We therefore propose a generative adversarial network (GAN) model based on intelligent data analytics for music emotion recognition under IoT. Driven by a double-channel fusion strategy, the GAN can effectively extract both local and global features of images or audio. Meanwhile, to increase the feature differences between emotional voices, the Mel-frequency cepstral coefficient (MFCC) feature matrix of the music signals is transformed to improve the expressive ability of the GAN. Experimental results show that the proposed model can effectively recognize music emotion. Compared with other state-of-the-art approaches, the proposed method greatly reduces the error recognition rate and achieves an accuracy above 87%, higher than that of the other methods.
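The MFCC features the abstract refers to are a standard low-level music representation. As a minimal, self-contained sketch (not the paper's implementation), the pipeline is: frame the signal, take the power spectrum, apply a triangular Mel filterbank, then decorrelate the log filterbank energies with a DCT-II. All parameter values below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    # standard HTK mel-scale mapping
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # triangular filters evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr, frame_len=512, hop=256, n_filters=26, n_ceps=13):
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, frame_len, sr)
    # DCT-II matrix to decorrelate the log filterbank energies
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    feats = []
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2 / frame_len
        log_e = np.log(np.maximum(fb @ power, 1e-10))
        feats.append(dct @ log_e)
    return np.array(feats)  # shape: (n_frames, n_ceps)

# toy input: one second of a 440 Hz tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
m = mfcc(np.sin(2 * np.pi * 440.0 * t), sr)
print(m.shape)  # (61, 13)
```

The resulting feature matrix (frames x coefficients) is the kind of object the paper's transform operates on before it is fed to the GAN.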

Highlights

  • Academic Editor: Anand Nayyar

  • We propose a generative adversarial network (GAN) model based on intelligent data analytics for music emotion recognition under IoT

  • Music emotion recognition refers to recognizing high-level affective states from low-level features of music, which can be regarded as a classification problem over music sequences. The main processes include emotion database establishment, emotional feature extraction, dimensionality reduction and feature selection, and emotion classification and recognition. There are many methods for music emotion recognition that have achieved good results, such as the hidden Markov model (HMM) [6], artificial neural network (ANN) [7], Gaussian mixture model (GMM) [8], support vector machine (SVM) [9], K-nearest neighbor (KNN) [10], and maximum likelihood Bayesian classification [11, 12]
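The final classification step above can be illustrated with the simplest of the listed methods, K-nearest neighbor. The following is a generic sketch, not the implementation from [10]; the 2-D feature vectors and the "sad"/"happy" labels are hypothetical stand-ins for summarized MFCC features.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from the query to every training feature vector
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    # majority vote among the k nearest neighbours
    return Counter(y_train[nearest]).most_common(1)[0][0]

# hypothetical 2-D emotion features (e.g., averaged MFCC statistics)
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array(["sad", "sad", "happy", "happy"])
print(knn_predict(X, y, np.array([0.85, 0.85])))  # happy
```

In practice the feature vectors would come from the extraction and dimensionality-reduction stages listed above, and k would be tuned on a validation split.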


Summary

Research Article

A Generative Adversarial Network Model Based on Intelligent Data Analytics for Music Emotion Recognition under IoT. To increase the feature differences between emotional music signals, this paper proposes a generative adversarial network (GAN) model with a double-channel fusion strategy based on intelligent data analytics. (1) A GAN model based on intelligent data analytics for music emotion recognition under IoT is proposed. (2) Driven by the double-channel fusion strategy, the GAN can effectively extract both local and global features of images or audio. (3) To increase the feature differences between emotional voices, the Mel-frequency cepstral coefficient (MFCC) feature matrix of the music signals is transformed to improve the expressive ability of the GAN. (4) Experimental results show that the proposed model can effectively recognize music emotion. Therefore, to improve the music emotion recognition rate, this paper proposes a GAN framework based on a double-channel attention mechanism (DCGAN) and introduces two different attention models: a feature attention model and a channel attention model, which capture feature dependencies in the feature space and across channels, respectively.
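The channel attention model described above re-weights feature-map channels by their relevance. As a minimal numpy sketch of one common formulation (a squeeze-and-excitation-style gate, assumed here rather than taken from the paper): globally pool each channel, pass the pooled vector through a small bottleneck, and rescale the channels with sigmoid weights. The weight matrices below are random placeholders standing in for learned parameters.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Channel attention over feature maps x of shape (C, H, W).
    w1: (C//r, C) bottleneck weights; w2: (C, C//r) expansion weights."""
    squeeze = x.mean(axis=(1, 2))                    # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # bottleneck + ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate in (0, 1)
    return x * weights[:, None, None]                # rescale each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1  # placeholder "learned" weights
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(x, w1, w2)
print(out.shape)  # (8, 4, 4)
```

The feature (spatial) attention model plays the complementary role, weighting positions within each map; combining the two gives the double-channel attention the DCGAN framework relies on.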

[Figure: generator backpropagation; emotion classes: Sad, Happy, Quiet, Lonely, Miss]

