Abstract

Music plays a significant role in evoking human emotions. With the rapid proliferation of smartphones and mobile internet, music streaming applications and websites have made music emotion recognition an increasingly active and exciting research area. The task nevertheless faces significant challenges, including restricted access to data, the scarcity of large datasets, and the limited availability of other emotionally relevant features. Although such features can be identified by analyzing lyrics and audio signals, datasets annotated with lyrical emotion labels remain scarce. This study uses the Music4All dataset to evaluate the lyrical features relevant for identifying four important human emotions - happy, angry, relaxed, and sad - using several machine learning algorithms based on a semantic psychometric model. A transfer learning approach was also used to learn the emotional content of lyrics from an in-domain dataset and then predict the emotions of the target dataset. The BERT model was further observed to improve the overall classification accuracy to 92%. A simple lyrics recommender system was also built using a Sentence Transformer model.
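To illustrate the kind of lyrics recommender mentioned above, the following is a minimal sketch, not the authors' implementation: it embeds lyrics with a Sentence Transformer model and ranks songs by embedding similarity. The model name ("all-MiniLM-L6-v2"), the toy lyrics corpus, and the recommend helper are illustrative assumptions.

from sentence_transformers import SentenceTransformer, util

# Load a pretrained sentence-embedding model (model choice is an assumption).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy lyrics corpus standing in for the Music4All lyrics.
lyrics_corpus = [
    "I'm walking on sunshine, and don't it feel good",
    "Hello darkness, my old friend",
    "We will, we will rock you",
]

# Pre-compute embeddings for every song in the corpus.
corpus_embeddings = model.encode(lyrics_corpus, convert_to_tensor=True)

def recommend(query_lyrics, top_k=2):
    # Embed the query lyrics and return the top_k most similar corpus songs.
    query_embedding = model.encode(query_lyrics, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    return [(lyrics_corpus[hit["corpus_id"]], float(hit["score"])) for hit in hits]

print(recommend("I talk to the shadows when I'm alone"))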
