Abstract

Music is a powerful means of self-expression and a rich source of enjoyment for listeners. Relaxing music, in particular, is an effective way to evoke strong emotions and convey a calming message. With technological advances, the number of artists, songs, and listeners keeps growing, which makes manually exploring and selecting music increasingly difficult. This study offers a system that analyzes a user's facial expressions in real time to assess the user's mood (emotion detection model); its output is then combined with music mapped from the music dataset to create a user-specific playlist (music recommendation model). A convolutional neural network classifies the user's emotions into 7 categories with an accuracy of 94 percent, thus satisfying the actual aim of the study.
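The recommendation step described above can be sketched as a simple mapping from the detected emotion to a playlist. This is a minimal illustration, not the paper's implementation: the seven class names assume the common FER-2013-style emotion labels, and the `PLAYLISTS` table and track names are hypothetical stand-ins for the mapped music dataset.

```python
# Minimal sketch of emotion-to-playlist mapping, assuming seven
# FER-2013-style emotion classes. In the described system, the
# emotion label would come from the CNN classifier, and PLAYLISTS
# would be built from the music dataset.

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

# Hypothetical mapping; a real system would populate this from the dataset.
PLAYLISTS = {
    "happy":    ["upbeat_track_1", "upbeat_track_2"],
    "sad":      ["mellow_track_1", "mellow_track_2"],
    "angry":    ["calming_track_1"],
    "fear":     ["soothing_track_1"],
    "disgust":  ["neutralizing_track_1"],
    "surprise": ["energetic_track_1"],
    "neutral":  ["ambient_track_1"],
}

def recommend(emotion: str) -> list[str]:
    """Return the playlist mapped to the detected emotion."""
    if emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {emotion}")
    return PLAYLISTS[emotion]
```

For example, `recommend("happy")` would return the tracks mapped to the happy class.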
