Abstract

In this paper, we first apply the short-time Fourier transform to extract statistical features from the frequency domain of vocal music. The extracted features are fused using Dempster-Shafer (D-S) evidence theory, and the fused vocal features are fed into an improved deep learning network to construct a vocal singing style classification model. Secondly, we design a vocal singing resource library system that organizes vocal music resources according to the classification of song styles. Finally, the vocal music resource library system undergoes comprehensive testing to verify that it meets both functional and performance requirements. The results show that, under the respective optimal thread counts of the vocal music resource library, the volume of DM7 network reads and writes remains between 200 and 300 KB, and the random read performance of HBase reaches 8340 TPS, indicating that the resource library provides users with a fast and convenient way to retrieve multidimensional resources. This paper provides a long-term reference for the preservation and use of vocal singing resources.
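
As a rough illustration of the feature-extraction step described above, the sketch below computes STFT-based statistical features from an audio signal. It is a minimal example, assuming SciPy is available; the sampling rate, window size, and the particular statistics chosen (spectral energy and spectral centroid moments) are illustrative assumptions, not the paper's exact feature set, and the D-S evidence fusion and classification stages are not shown.

```python
import numpy as np
from scipy.signal import stft

def stft_statistical_features(signal, fs=22050, nperseg=1024):
    """Extract simple frequency-domain statistics from a vocal signal
    via the short-time Fourier transform (STFT).

    Note: the statistics below are illustrative; the paper's actual
    feature set is not specified in the abstract.
    """
    # Compute the STFT; Zxx has shape (n_freqs, n_frames)
    f, t, Zxx = stft(signal, fs=fs, nperseg=nperseg)
    mag = np.abs(Zxx)

    # Per-frame spectral centroid: magnitude-weighted mean frequency
    centroid = (f[:, None] * mag).sum(axis=0) / (mag.sum(axis=0) + 1e-12)

    # Aggregate frame-level values into a fixed-length feature vector
    return np.array([
        mag.mean(),       # overall spectral energy
        mag.std(),        # energy variability across time and frequency
        centroid.mean(),  # average spectral centroid
        centroid.std(),   # centroid variability over time
    ])

# Example: features for one second of synthetic audio at 22.05 kHz
features = stft_statistical_features(np.random.randn(22050))
```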
