Abstract
Building on a study of the related literature, this paper proposes an algorithmic composition network from a machine learning perspective. It also examines the characteristics of music and develops a model for recognizing musical emotion. The main melody track is extracted using the information entropy of pitch and intensity, and note features are extracted and aggregated into bar-level feature vectors. The cosine of the angle between the feature vectors of adjacent bars is then used to measure their similarity, dividing the music into independent segments, and the emotion model is applied to each segment. By quantifying musical features, the paper classifies and quantifies music emotion through the mapping between those features and emotion, allowing the model to identify musical emotion accurately. According to simulation results, the model achieves an emotion recognition accuracy of up to 93.78 percent and a recall of up to 96.3 percent; it shows stronger recognition ability than the comparison methods, and its emotion recognition results are more reliable. The proposed approach can support composers in their creative work and can also serve intelligent music services.
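As a concrete illustration of two of the steps summarized above, the sketch below shows how entropy-based melody-track selection and cosine-similarity bar segmentation might be implemented. This is a minimal sketch under stated assumptions, not the paper's implementation: the Note record, the shannon_entropy and segment_bars helpers, and the 0.9 similarity threshold are all illustrative choices not taken from the paper.

```python
import math
from collections import Counter, namedtuple

# Hypothetical note record for illustration; the paper works with pitch and
# intensity (velocity) information from the music tracks.
Note = namedtuple("Note", ["pitch", "velocity"])


def shannon_entropy(values):
    """Shannon entropy (bits) of a discrete value sequence."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())


def melody_track_index(tracks):
    """Pick the track with the highest combined pitch and intensity entropy,
    on the assumption that the main melody carries the most information."""
    def score(track):
        return (shannon_entropy([n.pitch for n in track])
                + shannon_entropy([n.velocity for n in track]))
    return max(range(len(tracks)), key=lambda i: score(tracks[i]))


def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def segment_bars(bar_features, threshold=0.9):
    """Group consecutive bars into segments, starting a new segment whenever
    the cosine similarity between adjacent bar feature vectors drops below
    the threshold (0.9 here is an arbitrary illustrative value)."""
    segments = [[0]]
    for i in range(1, len(bar_features)):
        if cosine_similarity(bar_features[i - 1], bar_features[i]) < threshold:
            segments.append([i])
        else:
            segments[-1].append(i)
    return segments


if __name__ == "__main__":
    # Four toy bar feature vectors: the first two are similar, then the
    # feature direction changes, so two segments are produced.
    bars = [[1.0, 0.0], [0.9, 0.1], [0.1, 1.0], [0.0, 0.9]]
    print(segment_bars(bars))  # -> [[0, 1], [2, 3]]
```

Each resulting segment would then be passed to the emotion model described in the abstract; how its feature vectors map to emotion classes is specified in the body of the paper, not here.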