Abstract

Human emotion is a psychophysiological state of mind that manifests as positive or negative responses to external and internal stimuli. Songs have long been a popular medium for expressing human feelings. The majority of existing music recommendation systems rely on content-based recommendation engines. However, music selection is influenced not only by past tastes or musical content, but also by the user's current mood. This research presents a framework for learning a user's emotion from data gathered by a wearable device that combines galvanic skin response (GSR), photoplethysmography (PPG), and electroencephalography (EEG) physiological sensor signals, together with data captured via camera. This information is used as a supplement to the music recommendation engine. Sensor data and facial expression data can thus improve the effectiveness and accuracy of the recommendation engine.
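The paper does not publish an implementation, so the following is only a minimal sketch of the idea described above: emotion estimates from physiological sensors (GSR/PPG/EEG) and from facial-expression analysis are fused, and the fused emotion is used as a supplementary signal alongside a content-based score. All names (`fuse_emotion`, `recommend`, the emotion classes, and the weighting scheme) are hypothetical.

```python
# Hypothetical sketch of emotion-aware re-ranking; not the authors' code.

EMOTIONS = ["happy", "sad", "calm", "angry"]  # assumed emotion classes


def fuse_emotion(sensor_probs, face_probs, sensor_weight=0.6):
    """Late fusion of emotion probabilities from wearable sensors
    (GSR/PPG/EEG) and from camera-based facial-expression analysis."""
    fused = {
        e: sensor_weight * sensor_probs[e] + (1 - sensor_weight) * face_probs[e]
        for e in EMOTIONS
    }
    total = sum(fused.values())
    return {e: p / total for e, p in fused.items()}  # renormalise


def recommend(songs, emotion_probs, top_k=3):
    """Re-rank candidate songs: each song keeps its content-based score
    and gets a bonus for matching the fused emotion estimate."""
    def score(song):
        return song["content_score"] + emotion_probs.get(song["emotion_tag"], 0.0)
    return sorted(songs, key=score, reverse=True)[:top_k]


if __name__ == "__main__":
    sensor_probs = {"happy": 0.2, "sad": 0.5, "calm": 0.2, "angry": 0.1}
    face_probs = {"happy": 0.1, "sad": 0.6, "calm": 0.2, "angry": 0.1}
    songs = [
        {"title": "Song A", "emotion_tag": "sad", "content_score": 0.4},
        {"title": "Song B", "emotion_tag": "happy", "content_score": 0.7},
        {"title": "Song C", "emotion_tag": "calm", "content_score": 0.5},
    ]
    mood = fuse_emotion(sensor_probs, face_probs)
    for song in recommend(songs, mood):
        print(song["title"])
```

In this sketch the current mood simply biases the ranking toward mood-matching songs rather than replacing the content-based engine, mirroring the abstract's framing of emotion as a supplement to an existing recommender.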
