Abstract

When existing methods are used to recognize music genre style, the extracted features are not fused, which leads to poor recognition performance. Therefore, application research on multilevel local feature coding for music genre recognition is proposed. Music features are extracted from timbre, rhythm, and pitch, and the extracted features are fused using Dempster-Shafer (D-S) evidence theory. The fused music features are input into an improved deep learning network, and the storage system architecture is designed to exploit the availability, manageability, and scalability of cloud storage; it is divided into four modules: a storage layer, a management layer, a structure layer, and an access layer. On this basis, a music genre style recognition model is constructed, realizing the application of multilevel local feature coding to music genre recognition. The experimental results show that the recognition accuracy of the proposed method remains at a high level, that the mean square error is positively correlated with the number of beats, and that the waveform is denser after segmentation, indicating a good application effect.
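The abstract does not give implementation details for the fusion step, but the core of D-S evidence theory is Dempster's rule of combination, which merges the belief masses assigned by independent evidence sources (here, hypothetically, a timbre-based and a rhythm-based classifier) while discarding conflicting mass. A minimal sketch in Python, with illustrative mass values and genre labels that are assumptions, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two basic mass assignments (dict: frozenset of labels -> mass)
    using Dempster's rule of combination."""
    fused = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    norm = 1.0 - conflict  # renormalize by the non-conflicting mass
    return {s: m / norm for s, m in fused.items()}

# Hypothetical evidence over the frame of discernment {jazz, rock}:
m_timbre = {frozenset({"jazz"}): 0.6,
            frozenset({"jazz", "rock"}): 0.4}
m_rhythm = {frozenset({"jazz"}): 0.5,
            frozenset({"rock"}): 0.3,
            frozenset({"jazz", "rock"}): 0.2}

fused = dempster_combine(m_timbre, m_rhythm)
```

With these example masses, the conflicting mass is K = 0.6 x 0.3 = 0.18, so the fused belief in "jazz" rises to 0.62 / 0.82, sharper than either source alone gives.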
