Abstract

With the rapid development of information technology, the number of songs available is exploding, which makes music genre classification a challenging task; automated genre classification is currently a popular research topic. Mobile devices permeate people's lives and have brought great convenience to their life and work, making it possible to work anywhere and at any time. However, the constraints of mobile devices place strict demands on model size that traditional models struggle to meet. We aim to use deep learning to automatically identify and classify music, employing the MobileNet model to achieve lightweight music classification on mobile devices while improving classification accuracy. In this paper, we conduct experiments mainly on the Free Music Archive dataset, classifying music genres with the ResNet-101 and MobileNet models. We extract music features using the Short-Time Fourier Transform (STFT) and Mel-Frequency Cepstral Coefficients (MFCCs), improve the data pre-processing, and compare against other methods: our accuracy is about 7% higher than the traditional CRNN method. For the lightweight mobile implementation, the model trained with MobileNet has only 4% of the parameters of the best model in this paper while retaining high accuracy.

Keywords: Residual Network (ResNet); MobileNet; Deep learning; Short-Time Fourier Transform (STFT); Mel-Frequency Cepstral Coefficients (MFCC)
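As an illustrative sketch only (not the paper's code), the STFT step mentioned above can be computed by framing the signal, windowing each frame, and taking its FFT; the resulting magnitude spectrogram is the 2-D "image" that models such as ResNet or MobileNet consume. The frame size and hop length below are common defaults, not values taken from the paper:

```python
import numpy as np

def stft(signal, frame_size=1024, hop=512):
    """Magnitude Short-Time Fourier Transform (STFT).

    Slices the signal into overlapping frames, applies a Hann
    window to each, and returns the magnitude of the real FFT
    of every frame: shape (n_frames, frame_size // 2 + 1).
    """
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_size] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 440 Hz tone at a 22,050 Hz sample rate.
sr = 22050
t = np.arange(sr) / sr
spec = stft(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (42, 513) with the defaults above
```

With a frequency resolution of sr / frame_size ≈ 21.5 Hz per bin, the 440 Hz tone produces a spectral peak near bin 20, as expected.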
