Abstract

A music recommender system is an information retrieval system that suggests customized music to users based on their previous preferences and listening history. While existing systems often overlook the emotional state of the driver, we propose a hybrid music recommendation system, ConCollA, that provides a personalized experience based on user emotions. By incorporating facial expression recognition, ConCollA accurately identifies the driver's emotions using a convolutional neural network (CNN) model and suggests music tailored to their emotional state. ConCollA combines collaborative filtering, a novel content-based recommendation method named Mood Adjusted Average Similarity (MAAS), and the Apriori algorithm to generate personalized music recommendations. The performance of ConCollA is assessed using various evaluation metrics. The results show that the proposed emotion-aware model outperforms a collaborative-filtering-based recommender system.
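The abstract does not specify how ConCollA blends its collaborative and content-based components, so the following is only an illustrative sketch of a generic hybrid recommender of this kind. The function name `hybrid_scores`, the `mood_weight` parameter, and all track scores are assumptions, not the paper's actual MAAS formulation:

```python
# Hypothetical sketch of an emotion-aware hybrid recommender.
# The real ConCollA/MAAS scoring is not given in the abstract;
# the blending weight and toy data below are illustrative only.

def hybrid_scores(collab, content, mood_weight):
    """Blend per-track collaborative and content-based scores.

    mood_weight (assumed parameter): fraction of the final score
    taken from the mood-matched content-based component; a detected
    emotion could raise or lower this weight.
    """
    return {
        track: (1 - mood_weight) * collab.get(track, 0.0)
               + mood_weight * content.get(track, 0.0)
        for track in set(collab) | set(content)
    }

# Toy scores for three tracks from each component.
collab = {"song_a": 0.9, "song_b": 0.4, "song_c": 0.7}
content = {"song_a": 0.2, "song_b": 0.95, "song_c": 0.6}

# Suppose the CNN detected a "happy" state, so the mood-matched
# content component is weighted more heavily.
scores = hybrid_scores(collab, content, mood_weight=0.6)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # → song_b
```

With these toy numbers the mood-matched track `song_b` (0.4 × 0.4 + 0.6 × 0.95 = 0.73) outranks the purely collaborative favorite `song_a`, which is the qualitative effect an emotion-aware hybrid aims for.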
