Abstract

Detecting emotion features in a song remains a challenge in various areas of research, especially Music Emotion Classification (MEC). To classify a song under a certain mood or emotion, the machine learning algorithms must be intelligent enough to learn the data features and match them to the correct emotion. Until now, there have been only a few studies on MEC that exploit audio timbre features from the vocal part of a song together with its instrumental part. Timbre is the quality of a musical sound that distinguishes different types of sound production in human voices and in musical instruments such as string, wind and percussion instruments. Most existing work in MEC is done by looking at audio, lyrics, social tags, or a combination of two or more of these classes. The question is: does exploiting timbre features from both the vocal and the instrumental sound help produce positive results in MEC? This research therefore presents work on detecting emotion features in Malay popular music using an artificial neural network, by extracting audio timbre features from both vocal and instrumental sound clips. The findings of this research will collectively improve MEC based on the manipulation of vocal and instrumental timbre features, as well as contribute to the literature of music information retrieval, affective computing and psychology.

Highlights

  • Detecting emotion features in a song remains a challenge in various areas of research, especially Music Emotion Classification (MEC)

  • The question is: does exploiting timbre features from both the vocal and the instrumental sound help produce positive results in MEC? This research presents work on detecting emotion features in Malay popular music using an artificial neural network, by extracting audio timbre features from both vocal and instrumental sound clips

  • The accuracy of the classification result can be measured by dividing the number of correctly classified songs by the total number of songs. This performance measure is based on the evaluation used in the Music Information Retrieval Evaluation Exchange (MIREX), as done in the work by Beveridge and Knox (2012). 30 songs, categorized as happy, anger, calm and sad, were used to test the algorithm
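The accuracy measure described in the highlights above can be sketched in a few lines; this is a minimal illustration of the correct-over-total calculation, with hypothetical labels rather than results from the study:

```python
def classification_accuracy(predicted, actual):
    """Fraction of songs whose predicted emotion matches the annotated one:
    number of correctly classified songs divided by the total number of songs."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

# Illustrative labels drawn from the four emotion classes used in the study
# (happy, anger, calm, sad); these example values are hypothetical.
predicted = ["happy", "sad", "calm", "anger", "happy"]
actual    = ["happy", "sad", "anger", "anger", "calm"]
print(classification_accuracy(predicted, actual))  # → 0.6
```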


Summary

Introduction

Detecting emotion features in a song remains a challenge in various areas of research, especially Music Emotion Classification (MEC). There have been only a few studies on MEC that exploit audio timbre features from the vocal part of a song together with its instrumental part. The question is: does exploiting timbre features from both the vocal and the instrumental sound help produce positive results in MEC? This research presents work on detecting emotion features in Malay popular music using an artificial neural network, by extracting audio timbre features from both vocal and instrumental sound clips. The findings of this research will collectively improve MEC based on the manipulation of vocal and instrumental timbre features, as well as contribute to the literature of music information retrieval, affective computing and psychology. Music itself can be defined as vocal or instrumental sounds (or both) combined in such a way as to produce beauty of form, harmony and expression of emotion.
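The introduction refers to extracting audio timbre features from sound clips. As a hedged illustration only (the paper's actual feature set is not specified here), the zero-crossing rate is one simple timbre-related feature computable directly from raw samples:

```python
import math

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ --
    a simple timbre-related feature often used alongside spectral ones."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

# Synthetic 440 Hz sine tone at an 8 kHz sampling rate -- an illustrative
# stand-in for a real vocal or instrumental clip, not data from the study.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
print(zero_crossing_rate(tone))
```

A pure tone at 440 Hz crosses zero roughly 880 times per second, so the printed rate is about 0.11; noisier, brighter timbres yield higher rates, which is why such features help discriminate sound sources.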

