Abstract

Every piece of music conveys emotion through the sounds it presents, but detecting that emotion automatically is difficult because the emotions listeners feel are subjective. This motivates an automatic classification system for the emotions expressed in music. This paper presents the development of an emotion classification system for instrumental music. The system takes a music file in WAV format as input, extracts features using Mel-Frequency Cepstral Coefficients (MFCC), and classifies the extracted features with the K-Nearest Neighbor (K-NN) algorithm. The system outputs one of three emotion classes: happy, relaxed, or sad. The classifier achieved an accuracy of 97.5% for k = 1, 95% for k = 2, and 90% for k = 3.
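
As a rough illustration of the pipeline described in the abstract (WAV input, MFCC feature extraction, K-NN classification), the sketch below pairs librosa's MFCC extraction with scikit-learn's KNeighborsClassifier. The file names, labels, and the choice of 13 coefficients are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described pipeline: WAV -> MFCC features -> K-NN.
# File paths and labels below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, n_mfcc=13):
    """Load a WAV file and return the mean MFCC vector over all frames."""
    signal, sr = librosa.load(path, sr=None)              # keep native sample rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                              # one fixed-length vector per file

# Hypothetical training data: labeled instrumental tracks.
train_files = ["happy_01.wav", "relaxed_01.wav", "sad_01.wav"]
train_labels = ["happy", "relaxed", "sad"]

X_train = np.array([mfcc_features(f) for f in train_files])
knn = KNeighborsClassifier(n_neighbors=1)                 # k = 1 gave the best accuracy (97.5%)
knn.fit(X_train, train_labels)

print(knn.predict([mfcc_features("query.wav")]))          # e.g. ['happy']
```

Averaging the MFCCs over time is one simple way to reduce a variable-length recording to a fixed-length vector that K-NN can consume; the paper may well use a different aggregation.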
