Abstract
This research aims to enhance the user experience in music consumption by incorporating real-time facial emotion analysis. Emotions play a fundamental role in shaping individual preferences, and leveraging facial expressions to understand users' emotional states can significantly improve personalized music recommendation. The proposed system begins by capturing real-time facial expressions through a webcam or by analyzing static images. These facial expressions are then processed by a CNN-based emotion recognition model trained to classify emotions such as happiness, sadness, and anger. The CNN extracts high-level features from facial images, enabling accurate emotion recognition. Using the detected emotional state as input, the system applies a recommendation algorithm tailored to the user's current mood to suggest relevant music or videos from YouTube.
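To make the pipeline concrete, the following is a minimal sketch of the capture-classify-recommend loop described above. It assumes an OpenCV webcam feed, a pre-trained Keras CNN saved as "emotion_cnn.h5" (a placeholder filename), a 48x48 grayscale input shape and FER-2013-style label order (both assumptions, not details taken from the paper), and a simple emotion-to-search-query mapping for YouTube; it is illustrative, not the authors' exact implementation.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Label order assumed to match the CNN's training labels (assumption).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical pre-trained emotion-recognition CNN (placeholder filename).
model = load_model("emotion_cnn.h5")

# OpenCV's bundled Haar cascade for frontal-face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_emotion(frame):
    """Return the predicted emotion for the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Crop, resize, and normalize the face region to the assumed
        # 48x48 grayscale input the CNN expects.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        face = face.reshape(1, 48, 48, 1)
        probs = model.predict(face, verbose=0)[0]
        return EMOTIONS[int(np.argmax(probs))]
    return None

# Capture one webcam frame and turn the detected emotion into a YouTube query.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    emotion = detect_emotion(frame)
    if emotion:
        query = f"{emotion} mood music playlist".replace(" ", "+")
        print("https://www.youtube.com/results?search_query=" + query)
```

In practice the recommendation step would query the YouTube Data API rather than build a search URL, and the loop would run continuously over video frames; the single-frame version above keeps the sketch self-contained.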