Music plays a significant role in people's daily lives, and listeners generally want music that matches their personal tastes and current mood. Without an automated system, users must manually browse their libraries and assemble playlists to suit how they feel. The proposed system addresses this by generating a music playlist based on the user's current mood. Facial expressions are one of the most direct ways a person's mood is conveyed, so the system captures them with a webcam and feeds the frames to a learning algorithm that determines the most likely emotion. The emotion recognition model is trained on the FER-2013 dataset, which allows it to identify seven different emotions. In operation, the system captures live video from a webcam, processes each frame through the model, and outputs a prediction of the detected emotion. Because expressions are captured in real time, playlist recommendations adapt dynamically to the user's emotional state. This integration of emotion recognition with music selection ensures that the music aligns with the user's current feelings, fostering a more immersive and personalized listening experience.
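The pipeline described above can be sketched in a few lines: the trained model emits a probability for each of the seven FER-2013 classes, the highest-scoring class is taken as the detected emotion, and that emotion is mapped to a playlist. The playlist names and the emotion-to-playlist mapping below are illustrative assumptions, not the system's actual configuration:

```python
# Sketch of the emotion-to-playlist step, assuming a classifier that
# outputs one probability per FER-2013 class. Playlist names are
# hypothetical placeholders.

# The seven FER-2013 emotion classes, in the dataset's standard label order.
FER_2013_LABELS = [
    "angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"
]

# Hypothetical mapping from detected emotion to a playlist.
PLAYLISTS = {
    "angry": "calming-instrumentals",
    "disgust": "fresh-starts",
    "fear": "reassuring-acoustics",
    "happy": "upbeat-hits",
    "sad": "gentle-comfort",
    "surprise": "discovery-mix",
    "neutral": "everyday-favourites",
}

def predict_emotion(probabilities):
    """Return the label whose model score is highest (argmax)."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return FER_2013_LABELS[best]

def recommend_playlist(probabilities):
    """Map one frame's emotion probabilities to a playlist name."""
    return PLAYLISTS[predict_emotion(probabilities)]

# Example: a frame the model scores as predominantly "happy".
scores = [0.02, 0.01, 0.03, 0.80, 0.05, 0.04, 0.05]
print(recommend_playlist(scores))  # prints "upbeat-hits"
```

In a live deployment this function would be called on each webcam frame after the face-detection and classification stages, with the recommendation updated whenever the detected emotion changes.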