Abstract

In the current era of media and technology, music information retrieval techniques have made considerable progress, but music recommendation systems remain at a rudimentary stage. Personalized music recommendation is now commonplace, yet recommending songs based on emotions is still an uphill battle. Music strongly influences the human brain and can induce an uplifting, relaxed state of mind, which in turn helps us work more effectively. Recommending songs based on emotion can comfort listeners by suggesting music in keeping with their prevailing mental and physical state. Natural Language Processing and Deep Learning make it possible for machines to read and interpret emotions in text by recognizing patterns and finding correlations. In this paper, several deep learning models, namely Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), CNN-LSTM, and LSTM-CNN architectures, were compared for detecting the emotions angry, happy, love, and sad, and the best-performing model was integrated into an application. To enhance the application, a separate CNN model was used to detect emotions from facial expressions. The application takes either a text input or a facial-expression input from the user and, depending on the detected emotion, recommends songs and playlists.
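The abstract describes a two-stage pipeline: a classifier (text or facial) produces one of four emotion labels, and the application then recommends matching songs and playlists. A minimal sketch of that second stage, assuming a hypothetical emotion-to-playlist table (the playlist names below are illustrative, not from the paper):

```python
# Illustrative sketch of the recommendation stage, not the paper's
# implementation. An upstream classifier (e.g. the LSTM/CNN/CNN-LSTM/
# LSTM-CNN models the paper compares) is assumed to return one of the
# four emotion labels; recommendation then reduces to a lookup.

EMOTIONS = ("angry", "happy", "love", "sad")

# Hypothetical emotion-to-playlist mapping; a real system would query
# a music service or curated catalogue instead of a static table.
PLAYLISTS = {
    "angry": ["Calm Down Mix", "Instrumental Chill"],
    "happy": ["Feel-Good Hits", "Upbeat Pop"],
    "love":  ["Romantic Classics", "Acoustic Love Songs"],
    "sad":   ["Comfort Songs", "Mellow Evenings"],
}

def recommend(emotion: str) -> list:
    """Return candidate playlists for a detected emotion label."""
    if emotion not in EMOTIONS:
        raise ValueError("unknown emotion: %r" % (emotion,))
    return PLAYLISTS[emotion]
```

In the full application, the same lookup would be driven by whichever of the compared models scored best, with the facial-expression CNN feeding the identical label set.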
