Music plays a significant role in enriching people's lives, serving as a vital source of entertainment for listeners and, beyond mere amusement, often taking on a therapeutic role. In the ever-evolving landscape of music and technology, this project is driven by that profound impact. Leveraging advances in music players, such as playback control and genre classification, we focus on automating playlist creation. Instead of laborious manual curation, playlists are generated from the user's emotional state, identified through real-time facial expression analysis via a camera. The human face, a rich source of mood indicators, becomes the key input to the system: by extracting emotional cues directly from facial expressions, the project aims to deduce the user's emotional state quickly and craft a tailored playlist without time-consuming manual effort. Emotion recognition from image input is implemented through deep learning using the VGG16 model. Python, OpenCV, and Keras provide the video processing and deep learning functionality, complemented by a music player library for playback control. This combination of computer vision and deep learning delivers an interactive music player that dynamically selects tracks aligned with the user's real-time emotional expressions, offering a personalized and immersive musical experience.
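A minimal Python sketch of the pipeline described above is shown below, assuming a Haar-cascade face detector, a VGG16 backbone with a small classification head, and an illustrative five-class emotion label set; the abstract does not specify the actual training setup, label set, or music player library, so these details are assumptions.

```python
import cv2
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Assumed emotion label set; the paper's actual classes may differ.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]

def build_emotion_model(num_classes=len(EMOTIONS)):
    """VGG16 backbone plus a small softmax head (fine-tuned weights not included here)."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    x = Flatten()(base.output)
    x = Dense(256, activation="relu")(x)
    out = Dense(num_classes, activation="softmax")(x)
    return Model(base.input, out)

def detect_emotion(model, frame, face_cascade):
    """Detect the largest face in a BGR frame and classify its expression."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
    inputs = preprocess_input(np.expand_dims(face.astype("float32"), axis=0))
    probs = model.predict(inputs, verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]

if __name__ == "__main__":
    model = build_emotion_model()  # in practice, load fine-tuned emotion weights here
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)      # webcam provides the real-time frames
    ok, frame = cap.read()
    cap.release()
    if ok:
        emotion = detect_emotion(model, cascade=cascade, frame=frame)
        # The detected label would then drive the playlist / playback selection.
        print("Detected emotion:", emotion)
```

In such a setup, the detected label would typically be mapped to a per-emotion playlist and handed to the chosen music player library for playback.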