Abstract

When existing methods are used to recognize music genre and style, the extracted features are not fused, which leads to poor recognition performance. This paper therefore proposes an application of multilevel local feature coding to music genre recognition. Musical features are extracted for timbre, rhythm, and pitch, and the extracted features are fused using Dempster-Shafer (D-S) evidence theory. The fused features are fed into an improved deep learning network. Drawing on the availability, manageability, and extensibility of cloud storage, the storage architecture is organized into four modules: a storage layer, a management layer, a structure layer, and an access layer. A music genre and style recognition model is then constructed, realizing the application of multilevel local feature coding to music genre recognition. Experimental results show that the recognition accuracy of the proposed method remains consistently high, that the mean square error is positively correlated with the number of beats, and that the waveform after segmentation is denser, demonstrating good application performance.
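The abstract states that features from different channels (timbre, rhythm, pitch) are fused with D-S evidence theory. The paper does not give the implementation, but the core of D-S fusion is Dempster's rule of combination. The sketch below is a minimal, generic illustration of that rule; the genre labels, channel names, and mass values are hypothetical examples, not data from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts mapping frozenset -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to the empty intersection
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalize by 1 - K so the combined masses sum to 1
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical evidence from two feature channels (timbre and rhythm)
timbre = {frozenset({"jazz"}): 0.6, frozenset({"jazz", "blues"}): 0.4}
rhythm = {frozenset({"jazz"}): 0.5, frozenset({"blues"}): 0.3,
          frozenset({"jazz", "blues"}): 0.2}
fused = dempster_combine(timbre, rhythm)
# Agreement between channels concentrates mass on {"jazz"}
```

In a fusion pipeline like the one the abstract describes, each feature channel's classifier scores would first be converted into such mass assignments before combination.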
