Abstract

Music plays a vital role in human life, and it is a recognized therapy that can reduce depression and anxiety and improve mood, self-esteem, and quality of life. Music has the power to change human emotion as expressed through facial expressions, yet recommending music based on emotion is a difficult task. Existing work on emotion recognition and music recommendation has focused on depression and mental-health analysis. Hence, a model is proposed that recommends music based on facial expression recognition in order to improve or change the user's emotion. Face emotion recognition (FER) is implemented using the YOLOv5 algorithm. The output of FER is one of four emotion classes (happy, anger, sad, neutral), which serves as the input to the music recommendation system. A music player keeps track of the user's favorites for each emotion; if the user is new to the system, generalized music is suggested. The aim of this paper is to recommend music to users according to their emotion and thereby improve it.
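The recommendation step described above (favorites per emotion, with a generalized fallback for new users) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the playlist names, the `MusicRecommender` class, and the in-memory favorites store are all hypothetical, and the FER stage is assumed to have already produced one of the four emotion labels.

```python
from collections import defaultdict

# Hypothetical generalized pools per detected emotion; the four keys
# match the FER output classes described in the abstract.
GENERAL_PLAYLISTS = {
    "happy":   ["upbeat pop", "dance"],
    "anger":   ["calming instrumental", "ambient"],
    "sad":     ["uplifting acoustic", "soft rock"],
    "neutral": ["easy listening", "chill"],
}

class MusicRecommender:
    """Recommends tracks by emotion, preferring the user's stored favorites."""

    def __init__(self):
        # favorites[user][emotion] -> tracks the user liked in that emotion
        self.favorites = defaultdict(lambda: defaultdict(list))

    def add_favorite(self, user, emotion, track):
        # Record a track the user marked as a favorite for this emotion.
        self.favorites[user][emotion].append(track)

    def recommend(self, user, emotion):
        # Return the user's favorites if history exists for this emotion;
        # otherwise fall back to the generalized pool (new-user case).
        personal = self.favorites[user][emotion]
        return personal if personal else GENERAL_PLAYLISTS[emotion]
```

A new user with no history receives the generalized list for the detected emotion; once favorites are recorded, those take precedence for that emotion.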
