Abstract

Music plays a significant role in people's daily lives, and listeners generally want music that matches their personal taste and current mood. Today, users must manually browse their libraries and assemble playlists to suit how they feel. The proposed system addresses this by generating a music playlist automatically from the user's current mood. Facial expressions are among the most direct indicators of a person's emotional state, so the system captures them with a webcam and feeds the frames into a learning model that predicts the most likely emotion. The emotion recognition model is trained on the FER-2013 dataset and can identify seven distinct emotions. In operation, the system captures live video from the webcam, passes each frame through the model, and outputs a prediction of the detected emotion. By recognizing facial expressions in real time, the system dynamically adapts its playlist recommendations to the user's emotional state. This integration of emotion recognition with music selection ensures that the music aligns with the user's current feelings, providing a more immersive and personalized listening experience.
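The final step of the pipeline described above, mapping a predicted emotion to a playlist, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the seven class labels are the standard FER-2013 emotions, but the playlist names and the `recommend_playlist` helper are hypothetical placeholders, and the classifier's softmax output is assumed to arrive as a plain list of seven probabilities.

```python
# The seven emotion classes of the FER-2013 dataset.
FER2013_EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical emotion-to-playlist mapping; a real system would draw these
# from the user's library or a streaming service.
PLAYLISTS = {
    "angry": "Calm Down Mix",
    "disgust": "Fresh Start",
    "fear": "Reassuring Tunes",
    "happy": "Feel-Good Hits",
    "sad": "Comfort Songs",
    "surprise": "Upbeat Discoveries",
    "neutral": "Everyday Listening",
}


def predict_emotion(probabilities):
    """Map a 7-way softmax output to the most likely FER-2013 emotion."""
    if len(probabilities) != len(FER2013_EMOTIONS):
        raise ValueError("expected one probability per FER-2013 class")
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return FER2013_EMOTIONS[best]


def recommend_playlist(probabilities):
    """Return the playlist matched to the detected emotion."""
    return PLAYLISTS[predict_emotion(probabilities)]
```

For example, a softmax output peaking at the fourth class yields `predict_emotion([0.05, 0.02, 0.03, 0.7, 0.1, 0.05, 0.05]) == "happy"`, so the corresponding playlist is selected. In the full system this function would be called once per processed webcam frame (or per smoothed window of frames) with the model's output.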
