Music has always had a special connection with our emotions; it connects people around the world through shared feeling. Yet it is extremely difficult to generalize about music and claim that everyone will like the same kind. Mood-based recommendation is therefore valuable: it can help people relieve stress by playing calming music suited to their current mood. Its main purpose is to accurately predict the user’s mood and then play songs according to the user’s choice and that mood. The system applies human–computer interaction (HCI) techniques to recognize human emotions, extracting facial features from images of the user’s face. When a person wishes to listen to a specific kind of music, they may instead end up hearing music that does not match their mood. The primary objective of our project is therefore to address the main challenges users face when listening to music chosen at random. In this paper, two approaches are proposed to overcome these challenges. The first implements a questionnaire model: the user responds to a series of questions, the mood is determined from the responses, and music is suggested accordingly. The second involves designing a model that identifies the user’s emotion from a facial image and then suggests music based on the detected mood. If the user is not satisfied with the captured emotion, they can return to either of the two models described above and repeat the procedure. Our project aims to develop an interactive and user-friendly model.
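The questionnaire model described above can be sketched as a simple vote-counting scheme: each answer points to a candidate mood, and the majority mood selects a playlist. The question texts, mood labels, answer options, and playlist names below are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Hypothetical sketch of the questionnaire model: answers vote for a
# mood, the majority mood wins, and a matching playlist is suggested.
# All questions, moods, and playlists here are invented for illustration.
from collections import Counter

QUESTIONS = [
    ("How energetic do you feel right now?", {"low": "sad", "medium": "calm", "high": "happy"}),
    ("How was your day overall?",            {"bad": "sad", "okay": "calm", "great": "happy"}),
    ("Do you want to relax or be energized?", {"relax": "calm", "energized": "happy", "neither": "sad"}),
]

PLAYLISTS = {
    "happy": ["Upbeat Pop Mix", "Feel-Good Classics"],
    "calm":  ["Acoustic Chill", "Lo-fi Focus"],
    "sad":   ["Soothing Piano", "Comfort Songs"],
}

def infer_mood(answers):
    """Tally which mood each answer points to; the majority mood wins."""
    votes = Counter()
    for (_, mapping), answer in zip(QUESTIONS, answers):
        votes[mapping[answer]] += 1
    mood, _ = votes.most_common(1)[0]
    return mood

def suggest_music(answers):
    """Map questionnaire answers to a mood and a suggested playlist."""
    mood = infer_mood(answers)
    return mood, PLAYLISTS[mood]

mood, playlist = suggest_music(["high", "great", "relax"])
# Two answers vote "happy", one votes "calm", so the happy playlist is chosen.
```

The emotion-detection approach replaces `infer_mood` with a facial-expression classifier while keeping the same mood-to-playlist mapping, which is what allows the user to fall back to the questionnaire when the captured emotion seems wrong.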