Abstract
This study presents a music recognition and classification system that preprocesses musical samples from various instruments and extracts cepstrum coefficients. These coefficients are used to train the parameters of a deep belief network (DBN); once trained, the model predicts the instrument type of test samples. The paper also explores the application of a digital sound effector to audio from the erhu and from popular music, aiming for a harmonious integration of frequency and pitch and thereby supporting both reference to and innovation of the erhu within popular music. The analysis shows that, within the digital sound effector, the pitch accumulation plugin and the stirring effect plugin influence the fusion of erhu sounds with pop music more strongly than the comb filter plugin does. The integration of frequencies is concentrated in two key segments of the composition. The first, spanning 1’30” to 3’25”, presents the fusion within bars 21-85 of the erhu piece “Chu Song”. The second, from 4’68” to 7’49”, employs equalizer and delay effects to blend electronic music elements with the erhu, enhancing the piece’s contemporary relevance and appeal. This approach underscores the potential of digital effects for bridging traditional Chinese instruments with modern music genres.
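As a rough illustration only (not the authors' implementation), the two signal-processing ingredients named above can be sketched in a few lines: real cepstrum coefficients are the inverse FFT of the log-magnitude spectrum of an audio frame, and a feed-forward comb filter is a simple delay-and-add. Frame length, coefficient count, and gain below are illustrative assumptions.

```python
import numpy as np

def cepstral_coefficients(frame, n_coeffs=13):
    """Real cepstrum of a 1-D audio frame: IFFT of the log-magnitude
    spectrum. A simplified stand-in for the cepstrum features described
    in the abstract; n_coeffs=13 is a common but assumed choice."""
    spectrum = np.fft.fft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-10)  # epsilon avoids log(0)
    cepstrum = np.fft.ifft(log_mag).real
    return cepstrum[:n_coeffs]

def comb_filter(x, delay, gain=0.5):
    """Feed-forward comb filter: y[n] = x[n] + gain * x[n - delay]."""
    y = np.copy(x)
    y[delay:] += gain * x[:-delay]
    return y

# Example: one 1024-sample frame of a synthetic 440 Hz tone at 16 kHz
sr = 16000
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 440 * t)
coeffs = cepstral_coefficients(frame)
filtered = comb_filter(frame, delay=32)
print(coeffs.shape)     # (13,)
print(filtered.shape)   # (1024,)
```

In a full pipeline of the kind the abstract describes, frames of such coefficients would form the input vectors on which the DBN's parameters are trained.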