Abstract

This study focuses on emotion classification through the detection of human facial expressions using a Convolutional Neural Network (CNN). The network classifies the user's emotion and plays music matched to the identified facial expression. The proposed approach uses a CNN architecture to classify an expression into one of six categories: happy, sad, angry, disgusted, fearful, or neutral. By playing music customized to the user's current mood, the smart music player has the potential to enhance the listening experience. It could also help users regulate their emotions by playing music that, depending on how they feel at the time, helps them unwind, feel joyful, or reduce tension. The smart music player primarily employs the system camera to detect the user's facial expression and then plays appropriate songs from a remote database based on the detected mood. If the system determines that the user is in a happy mood, a random song from the happy playlist is played; the same process applies to the other five emotions.

Key Words: Facial Expression, Emotion Detection, Convolutional Neural Networks, Different Playlists.
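The selection step described in the abstract (play a random song from the playlist that matches the detected emotion) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the playlist contents, file names, and the `pick_song` function are all assumed for the example.

```python
import random

# Hypothetical playlists keyed by the six emotions the paper's CNN predicts.
# Song file names are placeholders, not taken from the paper.
PLAYLISTS = {
    "happy":     ["happy_song_1.mp3", "happy_song_2.mp3"],
    "sad":       ["sad_song_1.mp3", "sad_song_2.mp3"],
    "angry":     ["angry_song_1.mp3"],
    "disgusted": ["disgusted_song_1.mp3"],
    "fearful":   ["fearful_song_1.mp3"],
    "neutral":   ["neutral_song_1.mp3"],
}

def pick_song(emotion: str) -> str:
    """Return a random song from the playlist matching the detected emotion."""
    if emotion not in PLAYLISTS:
        raise ValueError(f"Unknown emotion: {emotion!r}")
    return random.choice(PLAYLISTS[emotion])
```

In the full system, `emotion` would be the label output by the CNN classifier rather than a manually supplied string.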
