Abstract

For a given user emotion, the proposed system evaluates songs on two factors: the song's relevance to the user's preferences and the song's mental influence on the user's feelings. In this system, the user's emotion is not entered manually but detected automatically by the machine. To do this, the user's facial expression is captured from a webcam and used as input to the emotion detection process. The motivation behind this system is the lack of a context-aware music recommendation system in which the automatically detected mood of the user serves as the principal contextual key. The need for such a system is underscored by constantly expanding digital music libraries, which make it remarkably difficult for listeners to recall a particular song matching their present mood. By training the system to recognize the user's emotional state from facial expressions, listeners can generate a playlist that suits their current emotion and in which songs are also rated by their potential mental influence on that emotion.

Keywords: Face recognition, face detection, PCA, emotion extraction and detection, Euclidean distance.
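The abstract names PCA and Euclidean distance as the core of the emotion detection step. As a rough illustrative sketch only, and not the authors' implementation, the following Python snippet shows one common way such a pipeline can be assembled: flattened face images are projected onto a PCA ("eigenface") subspace, and a new face is assigned the emotion label of its nearest training face by Euclidean distance. The data, image size, and emotion labels here are stand-ins; a real system would supply cropped face images taken from webcam frames.

```python
import numpy as np

# --- PCA + nearest-neighbour emotion classification: an illustrative sketch ---
# Training faces are flattened grayscale images, one row per image.

def fit_pca(faces, n_components=20):
    """Compute the mean face and the top principal components of the training set."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Project a flattened face image onto the PCA subspace."""
    return components @ (face - mean)

def classify_emotion(face, mean, components, train_weights, train_labels):
    """Return the emotion label of the nearest training face (Euclidean distance)."""
    w = project(face, mean, components)
    dists = np.linalg.norm(train_weights - w, axis=1)
    return train_labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 40 "face images" of 48x48 pixels with assumed emotion labels.
    # In practice these would be face crops detected in webcam frames.
    faces = rng.random((40, 48 * 48))
    labels = np.array(["happy", "sad", "angry", "neutral"] * 10)

    mean, components = fit_pca(faces, n_components=20)
    train_weights = np.array([project(f, mean, components) for f in faces])

    test_face = rng.random(48 * 48)
    print("Detected emotion:", classify_emotion(test_face, mean, components,
                                                train_weights, labels))
```

The detected label could then serve as the contextual key for ranking songs by preference relevance and expected mood influence, as described above.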
