Abstract
In the music industry, music is grouped by type, such as genre, artist, instrumentation, and mood. Music Information Retrieval (MIR) is the field of research that retrieves and processes the metadata of music files to perform this grouping. This study builds on the observation that each piece of music carries its own implied mood. By creating a machine learning model using a Backpropagation Neural Network (BPNN) with Mel Frequency Cepstral Coefficients (MFCC) as the input feature, music can be classified by mood. Classification is carried out over four mood classes based on Thayer's model. Several previous studies report that MFCC features yield very good accuracy in audio processing, as does BPNN for classification, so the combination is expected to produce a well-performing model. The data used in this study were obtained from the Internet, with a total dataset of 200 songs. The resulting classifier, a BPNN trained on MFCC features, achieves an accuracy of 87.67%.
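As a concrete illustration of the pipeline the abstract describes, here is a minimal Python sketch of MFCC extraction followed by a backpropagation-trained feedforward classifier. The librosa/scikit-learn toolchain, the 13 MFCC coefficients, the time-averaging step, the single 64-unit hidden layer, and the Thayer quadrant label names are all assumptions for illustration; the paper does not specify them.

# Minimal sketch: MFCC features + a backpropagation-trained feedforward
# network for four-class mood classification (assumed toolchain: librosa
# for feature extraction, scikit-learn's MLPClassifier for the BPNN).
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Thayer-quadrant mood labels; names assumed, not taken from the paper.
MOODS = ["exuberance", "anxious", "contentment", "depression"]

def extract_mfcc(path, n_mfcc=13):
    """Load an audio file and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-length vector per track

def train(pairs):
    """Train on a list of (audio_path, mood_index) pairs prepared elsewhere."""
    X = np.array([extract_mfcc(path) for path, _ in pairs])
    y = np.array([label for _, label in pairs])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    # MLPClassifier is trained by backpropagation (Adam optimizer by default).
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf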