Abstract
Given the vast array of available choices, it can be difficult to determine what music a person wants to hear at a given moment. This paper therefore proposes a new approach to playing music based on the user's emotion. The primary goal of the music recommendation system proposed in this paper is to offer users recommendations that match their tastes. Existing approaches involve manually operating a jukebox, using wearable computers, or classifying songs by their auditory characteristics. Analysing the user's facial expression can reveal the user's present emotional or mental state, and this area holds great potential for offering audiences a wide variety of music and video options based on their preferences. The primary goal of this paper is to display a playlist of songs on a music application (YouTube/Spotify) according to each user's mood. With the user's consent, a camera captures several images of the user at that moment. These images then go through a thorough training and testing process to determine the person's mood. For this, a deep learning technique, the convolutional neural network (CNN), is used to categorize the various emotions. Based on the trained model's classification, the music playlist is then generated.
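The pipeline the abstract describes, classifying a face image into an emotion with a CNN and then selecting a playlist for that emotion, can be sketched as follows. This is a minimal illustration, not the paper's code: the emotion labels, the emotion-to-playlist mapping, and the function names are all hypothetical, and the CNN itself is represented only by its output probabilities.

```python
# Illustrative sketch (not the paper's implementation): turning a CNN's
# per-class emotion probabilities into a playlist query for a music app.
# The class order and playlist mapping below are hypothetical examples.

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # assumed class order

PLAYLIST_BY_EMOTION = {  # hypothetical mood-to-playlist mapping
    "angry":   "calming instrumental",
    "happy":   "upbeat pop hits",
    "neutral": "easy listening",
    "sad":     "comforting acoustic",
}

def predict_emotion(class_probabilities):
    """Pick the emotion with the highest predicted probability (argmax)."""
    best_index = max(range(len(class_probabilities)),
                     key=lambda i: class_probabilities[i])
    return EMOTIONS[best_index]

def playlist_query(class_probabilities):
    """Map CNN output probabilities to (emotion, playlist search query)."""
    emotion = predict_emotion(class_probabilities)
    return emotion, PLAYLIST_BY_EMOTION[emotion]

# Example: probabilities a trained CNN might emit for one face image.
emotion, query = playlist_query([0.05, 0.80, 0.10, 0.05])
# emotion == "happy", query == "upbeat pop hits"
```

In a full system, the probability vector would come from a CNN trained on labeled facial-expression images, and the query string would be passed to the music application's search or playlist API.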