Abstract

In recent years, with the development of the Internet and digital audio technology, music information retrieval has become a research hotspot. The rise of deep learning and machine learning, together with rapid improvements in computer hardware and software, has laid a solid foundation for identifying different music genres. Within this field, minority music style recognition is an important research direction, yet existing approaches based on deep convolutional recurrent neural networks still perform poorly. Because convolutional neural networks (CNNs) have a strong ability to capture informative features, this paper uses a CNN to extract features from music signals and classify them. First, the spectrum of the original music signal is separated by the harmonic/percussive source separation (HPSS) algorithm into a harmonic component, which varies smoothly in time, and a percussive component, which varies smoothly in frequency. These components, combined with the original spectrum, serve as the CNN input; the network structure is then designed, and the influence of different structural parameters on the recognition rate is studied. Experiments on minority music datasets show that, compared with music recognition methods from other studies, the proposed approach effectively improves the recognition of minority music styles relative to methods that use only a single feature.
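
As a rough illustration of the pipeline described above (not the authors' implementation), the following Python sketch uses librosa's HPSS decomposition to split a spectrogram into harmonic and percussive components, stacks them with the original spectrum as a three-channel input, and feeds them to a small CNN classifier. The file name, class count, network layout, and all hyperparameters are placeholder assumptions; the paper's actual network structure and parameter study are not reproduced here.

```python
# Illustrative sketch only: HPSS-based three-channel spectrogram input for a CNN
# classifier. The architecture and hyperparameters below are assumptions, not the
# configuration reported in the paper.
import numpy as np
import librosa
import tensorflow as tf


def hpss_features(path, sr=22050, n_fft=2048, hop_length=512):
    """Return a (freq, time, 3) array: original, harmonic, and percussive spectra in dB."""
    y, _ = librosa.load(path, sr=sr)
    S = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)   # complex spectrogram
    H, P = librosa.decompose.hpss(S)                           # harmonic / percussive split
    to_db = lambda X: librosa.amplitude_to_db(np.abs(X), ref=np.max)
    return np.stack([to_db(S), to_db(H), to_db(P)], axis=-1)


def build_cnn(input_shape, n_classes):
    """Small CNN over the stacked spectrogram channels (architecture is assumed)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])


if __name__ == "__main__":
    x = hpss_features("example_clip.wav")       # hypothetical audio file
    model = build_cnn(x.shape, n_classes=5)     # class count is an assumption
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Untrained forward pass, just to verify input/output shapes.
    print(model.predict(x[np.newaxis, ...]).shape)
```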
