Abstract
Music can express human emotions such as happiness, sadness, love, violence, and energy. Listening to music is accessible to everyone, everywhere, at any time, and the number of available songs grows daily; categorising songs by emotion is therefore crucial for music recommendation systems, where users may otherwise face media overload. Advances in signal processing and machine learning algorithms make it possible to extract audio features and predict emotions from them. In this work, several multi-class machine learning algorithms are compared, alongside dimensional emotion models such as those of Thayer and Russell, to identify the best-performing approach. A data set is built from audio features of music extracted by signal processing methods. Candidate audio features are analysed and tested in Matlab, and the highest-quality features for this task are selected for implementation in Python. Mood tags are assigned to audio tracks manually through a listener survey, and the extracted audio features are then used to train a machine learning model that generates mood tags automatically. The classified tracks can then be used as part of a recommendation application developed for the Android platform using appropriate frameworks and tools.
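To make the pipeline described above concrete, the minimal sketch below shows per-track feature extraction followed by multi-class mood classification. It assumes librosa for signal processing and a scikit-learn random forest as the classifier; the specific mood tags, feature set, and model are illustrative stand-ins, not necessarily the ones evaluated in this work.

```python
# Hypothetical sketch of the feature-extraction + classification pipeline.
# librosa and RandomForestClassifier are assumed stand-ins; the paper's
# exact features and algorithms may differ.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

MOODS = ["happy", "sad", "energetic", "calm"]  # assumed survey mood tags


def extract_features(path):
    """Summarise one audio track as a fixed-length feature vector."""
    y, sr = librosa.load(path, duration=30)  # analyse the first 30 s
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)     # harmony
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)       # rhythm
    # Mean-pool frame-level features into a single vector per track.
    return np.hstack([mfcc.mean(axis=1), chroma.mean(axis=1), tempo])


def train_mood_classifier(tracks, labels):
    """Fit a multi-class mood classifier.

    `tracks` is a list of audio file paths and `labels` the corresponding
    survey-assigned mood tags; both are assumed to come from the manually
    annotated data set described in the abstract.
    """
    X = np.vstack([extract_features(p) for p in tracks])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    return clf
```

Mean-pooling frame-level features into one vector per track is a common baseline choice; any multi-class scikit-learn estimator could be swapped in to compare algorithms as the abstract proposes.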