Because classification saves time and makes the learning process easier, its contribution to music learning cannot be denied. One of the most established and effective approaches to music classification is classification by genre. Given the rapid growth of music production worldwide and the significant increase in the volume of available data, classifying music genres has become too complex a task to be performed by humans. Considering the successful results of deep neural networks in this field, our aim is to develop a deep learning algorithm that can classify 10 different music genres. To allow the efficiency of the model to be compared with that of others, we perform the classification on the GTZAN dataset, which has been used in many previous studies and retains its validity. In this article, taking previous successful results into account, we use a convolutional neural network (CNN) to classify music genres. Unlike previous studies in which a CNN was used as a classifier, we represent the music segments in the dataset by mel-frequency cepstral coefficients (MFCCs) instead of visual features or representations. We obtain the MFCCs by preprocessing the music pieces in the dataset, then train a CNN model with the acquired MFCCs and evaluate the success of the model on the test data. As a result of this study, we develop a model that successfully classifies music genres while using less data than previous studies.
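The abstract outlines a pipeline of MFCC extraction followed by CNN training; the sketch below illustrates one plausible realization of that pipeline, assuming librosa for MFCC extraction and a small Keras CNN. The coefficient count, segment length, layer sizes, and optimizer are illustrative assumptions, not the paper's reported configuration.

```python
# Hypothetical sketch of an MFCC + CNN genre-classification pipeline.
# Library choices (librosa, TensorFlow/Keras) and all hyperparameters
# are assumptions for illustration, not the paper's actual setup.
import librosa
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_MFCC = 13          # number of MFCC coefficients per frame (assumed)
SAMPLE_RATE = 22050  # GTZAN clips are commonly loaded at 22.05 kHz
NUM_GENRES = 10      # GTZAN contains 10 genres

def extract_mfcc(path, duration=30.0):
    """Load an audio clip and return its MFCC matrix (frames x coefficients)."""
    signal, sr = librosa.load(path, sr=SAMPLE_RATE, duration=duration)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=N_MFCC,
                                n_fft=2048, hop_length=512)
    return mfcc.T  # shape: (frames, N_MFCC)

def build_cnn(input_shape):
    """Small CNN classifier over MFCC matrices; the architecture is assumed."""
    model = models.Sequential([
        layers.Input(shape=input_shape),            # (frames, N_MFCC, 1)
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_GENRES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Typical usage (X: stacked MFCC matrices with a channel axis, y: genre labels):
# model = build_cnn(input_shape=X_train.shape[1:])
# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30)
```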