In today's music streaming landscape, personalized song recommendations are crucial for enhancing the user experience. This study presents a deep learning method for song recommendation that combines facial expression detection with heart rate monitoring. A Convolutional Neural Network (CNN) classifies emotional states from facial expressions with 92% accuracy, while heart rate data provides additional context about the intensity of the user's emotion. These emotional cues are matched against a large music library, ensuring that recommended songs correspond to the user's current emotional state. The approach delivers an engaging, emotionally resonant, and contextually appropriate music discovery experience.
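The matching step described above could be sketched, in highly simplified form, as follows. The abstract does not specify the fusion logic, so the catalog, emotion labels, and heart-rate thresholds here are purely illustrative assumptions: the CNN's predicted emotion label and a coarse intensity level derived from heart rate jointly filter the song library.

```python
# Hypothetical sketch of the fusion step: a CNN supplies an emotion
# label, heart rate supplies intensity, and songs are matched on both.
# The catalog, labels, and the 90 bpm threshold are illustrative only.

CATALOG = [
    {"title": "Upbeat Anthem", "emotion": "happy", "intensity": "high"},
    {"title": "Calm Morning", "emotion": "happy", "intensity": "low"},
    {"title": "Rainy Day", "emotion": "sad", "intensity": "low"},
    {"title": "Storm Inside", "emotion": "sad", "intensity": "high"},
]

def intensity_from_heart_rate(bpm: float) -> str:
    """Map heart rate to a coarse emotional-intensity level (assumed threshold)."""
    return "high" if bpm >= 90 else "low"

def recommend(emotion: str, bpm: float) -> list:
    """Return catalog titles matching the detected emotion and intensity."""
    level = intensity_from_heart_rate(bpm)
    return [song["title"] for song in CATALOG
            if song["emotion"] == emotion and song["intensity"] == level]

print(recommend("happy", 110))  # prints ['Upbeat Anthem']
```

In the paper's actual pipeline, the emotion label would come from the CNN's softmax output rather than being passed in directly, and the library matching would operate over far richer song metadata.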