Abstract

The human face is a crucial organ for conveying an individual's emotional state and behavior. However, manually creating a playlist that matches an individual's emotional state is a labor-intensive and time-consuming task. Several algorithms have been proposed to automate this process, but they are often slow and inaccurate. To address this, a new system is proposed that uses facial expression extraction to generate an appropriate playlist automatically. This system significantly reduces the computational time and overall cost of playlist generation while increasing accuracy. The system captures facial expressions using an inbuilt camera, and the emotion detection algorithm achieves an accuracy of approximately 85-90% on real-time images and 98-100% on static images. With this level of accuracy and performance, the proposed system outperforms the existing algorithms reviewed in the literature survey. Based on the detected emotion, the system creates a playlist that matches the individual's emotional state. This approach offers a more efficient and accurate way to generate personalized playlists, ultimately saving time and effort for users.
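
The abstract describes a three-step flow: capture a frame from the inbuilt camera, classify the facial expression, and select a playlist for the detected emotion. The sketch below illustrates that flow under stated assumptions; it is not the paper's implementation. It assumes OpenCV for camera capture, a placeholder `predict_emotion()` standing in for the paper's detection algorithm, and a hypothetical `EMOTION_PLAYLISTS` mapping.

```python
# Minimal sketch of the capture -> emotion -> playlist flow described in the abstract.
# Assumptions (not from the paper): OpenCV webcam capture, a placeholder classifier,
# and a hypothetical emotion-to-playlist mapping.
import cv2

# Hypothetical emotion-to-playlist mapping; real playlists would come from the
# user's music library or a streaming service.
EMOTION_PLAYLISTS = {
    "happy": ["Upbeat Mix"],
    "sad": ["Mellow Mix"],
    "angry": ["Calm-Down Mix"],
    "neutral": ["Everyday Mix"],
}

def predict_emotion(frame) -> str:
    """Placeholder for the paper's facial-expression classifier.
    A real implementation would run a trained model on the detected face region."""
    return "neutral"

def generate_playlist() -> list:
    cap = cv2.VideoCapture(0)          # inbuilt camera
    ok, frame = cap.read()             # capture a single frame
    cap.release()
    if not ok:
        raise RuntimeError("Could not read from camera")
    emotion = predict_emotion(frame)   # detect the emotional state
    return EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])

if __name__ == "__main__":
    print(generate_playlist())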
