Abstract

We propose a new approach to playing music automatically based on the user's facial emotions. Most existing approaches require the user to play music manually, rely on wearable computing devices, or classify music by its audio features. We use a Convolutional Neural Network for emotion detection, while Pygame and Tkinter handle the music recommendation component. Facial expressions are captured with a built-in camera, and feature extraction is performed on the input face images to detect emotions such as happy, angry, sad, surprised, and neutral. An automatic music playlist is then generated from the user's current emotion.
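The emotion-to-playlist step described above could be sketched as follows. This is a minimal illustration only: the emotion labels match those listed in the abstract, but the playlist contents, file names, and the `select_playlist` helper are hypothetical, and the CNN detection stage is assumed to have already produced a label.

```python
# Illustrative sketch of mapping a detected emotion to a playlist.
# The label set (happy, angry, sad, surprised, neutral) comes from the
# abstract; the playlists and file names below are made-up placeholders.

PLAYLISTS = {
    "happy": ["upbeat_01.mp3", "upbeat_02.mp3"],
    "angry": ["calming_01.mp3"],
    "sad": ["uplifting_01.mp3"],
    "surprised": ["ambient_01.mp3"],
    "neutral": ["mixed_01.mp3"],
}

def select_playlist(emotion: str) -> list:
    """Return the playlist for a detected emotion, falling back to neutral."""
    return PLAYLISTS.get(emotion, PLAYLISTS["neutral"])

# Playback could then use Pygame's mixer, e.g.:
#   import pygame
#   pygame.mixer.init()
#   pygame.mixer.music.load(select_playlist(detected_emotion)[0])
#   pygame.mixer.music.play()
```

In a full pipeline, `detected_emotion` would be the CNN's prediction on a face image captured from the camera; the fallback to the neutral playlist handles any label outside the trained set.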
