Abstract
Music genre classification aims to efficiently find music of a similar genre among numerous tracks, better satisfying users' tastes and expectations when listening to music. This paper proposes a new method for classifying music genres with artificial neural networks (ANNs) and convolutional neural networks (CNNs). First, Mel-frequency cepstral coefficients (MFCCs) are used to preprocess the audio into a Mel-frequency cepstrum (MFC). Then, we upgrade Anupam's CNN model, because the features extracted from the MFC alone are not sufficient for a CNN to learn from a dataset this small. Multiple features are therefore extracted for each audio file, and the two features most correlated on the dataset are adopted as the input of an ANN. To verify the proposed method's effectiveness, we compare it with other state-of-the-art methods on the GTZAN dataset. The experimental results show that our method achieves higher accuracy than Anupam's. When using only the MFCC feature, Conv-Conv-Pool, a substructure in which we add two convolutional layers before each max-pooling layer, outperforms Conv-Pool, which in turn outperforms a plain ANN. However, by concatenating another correlated feature, the spectral centroid mean, a measure used in digital signal processing to characterize a spectrum, a simple ANN achieves much higher accuracy than one using only a single MFCC feature, reaching about 94.1%.
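The spectral centroid mentioned above is the magnitude-weighted mean frequency of a spectrum. As an illustration only (not the paper's pipeline, which would typically use a library such as librosa over many frames), a minimal sketch with a naive one-sided DFT might look like this; the function name and toy signal are assumptions for demonstration:

```python
import math
import cmath

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency (Hz) of one audio frame.

    Uses a naive one-sided DFT for clarity; real feature extractors
    use an FFT and average the per-frame centroids over the whole clip.
    """
    n = len(frame)
    mags, freqs = [], []
    for k in range(n // 2 + 1):
        # k-th DFT bin: X_k = sum_t x[t] * e^{-2*pi*i*k*t/N}
        xk = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                 for t in range(n))
        mags.append(abs(xk))
        freqs.append(k * sample_rate / n)  # bin center frequency in Hz
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

# Toy frame: a pure 250 Hz sine sampled at 8 kHz.
# All spectral energy sits at 250 Hz, so the centroid is ~250 Hz.
fs, n = 8000, 256
frame = [math.sin(2 * math.pi * 250 * t / fs) for t in range(n)]
print(round(spectral_centroid(frame, fs)))  # → 250
```

For a real clip, the centroid is computed per short frame and the mean over frames gives the "spectral centroid mean" feature that the paper concatenates with the MFCCs.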