Abstract

Music plays a significant role in everyday human life and in emerging technologies. Conventionally, the user must search for a song manually from the list of songs in a music player. Here, an efficient and accurate model is introduced to generate a playlist based on the user's present emotional state and behaviour. Existing methods for automated playlist generation are computationally inefficient, inaccurate, and may require additional hardware such as EEG devices or other sensors. Speech is the most primitive and natural way of communicating feelings, emotions, and mood, but processing it is computationally intensive, time-consuming, and expensive. The proposed system therefore combines real-time facial emotion extraction with audio feature extraction.
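
The abstract does not name specific tools, so the following is only a minimal sketch of the kind of pipeline described: capture a webcam frame, detect a face with OpenCV, classify its emotion, and map the result to a playlist. The `classify_emotion` function and the `EMOTION_TO_PLAYLIST` mapping are hypothetical placeholders, not part of the paper.

```python
# Minimal sketch: webcam frame -> face detection -> emotion -> playlist.
# The emotion classifier below is a stand-in; a real system would run a
# trained model (e.g. a CNN) on the cropped face region.

import cv2

# Haar cascade face detector bundled with OpenCV.
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Hypothetical mapping from a detected emotion label to a playlist name.
EMOTION_TO_PLAYLIST = {
    "happy": "Upbeat Mix",
    "sad": "Calm Acoustic",
    "angry": "Hard Rock",
    "neutral": "Daily Favourites",
}


def classify_emotion(face_image) -> str:
    """Placeholder for a real facial-emotion classifier."""
    # Returning a constant keeps the sketch self-contained and runnable.
    return "neutral"


def recommend_playlist_from_webcam() -> str:
    capture = cv2.VideoCapture(0)  # default camera
    ok, frame = capture.read()
    capture.release()
    if not ok:
        return EMOTION_TO_PLAYLIST["neutral"]

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return EMOTION_TO_PLAYLIST["neutral"]

    # Classify the first detected face and look up a matching playlist.
    x, y, w, h = faces[0]
    emotion = classify_emotion(gray[y:y + h, x:x + w])
    return EMOTION_TO_PLAYLIST.get(emotion, "Daily Favourites")


if __name__ == "__main__":
    print("Suggested playlist:", recommend_playlist_from_webcam())
```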
