Abstract

Traditionally, music was considered an analog signal that had to be composed by hand. In recent decades, however, technology has emerged that can autonomously compose a suite of music without any human interaction. Toward this goal, this article proposes an autonomous music composition technique based on long short-term memory (LSTM) recurrent neural networks. First, during preprocessing the music collection is split into sequences of unit duration, and the Mel cepstrum coefficients of the music audio are extracted as features. Second, training samples composed of these feature vectors are used to train an LSTM model, which then predicts new sequences. Finally, the generated music sequences are spliced and fused to produce new music. Experiments designed and performed for this article show promising results: the model achieves a maximum accuracy of 99% and a minimum loss of 0.03.

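To make the described pipeline concrete, below is a minimal sketch of its three steps: slicing audio into unit-time frames, extracting Mel cepstrum (MFCC) features, and training an LSTM to predict the next feature frame, with predictions spliced into a new sequence. It assumes librosa for feature extraction and Keras for the model; the file name, window length, coefficient count, and layer sizes are illustrative assumptions, not the authors' settings.

```python
# Sketch of the pipeline: slice audio into unit-time frames, extract MFCC
# features, train an LSTM to predict the next frame, then splice predictions
# into a new sequence. All hyperparameters here are illustrative assumptions.
import numpy as np
import librosa
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

N_MFCC = 20   # Mel cepstrum coefficients per frame (assumed)
SEQ_LEN = 32  # frames per training window (assumed)

def extract_features(path, sr=22050):
    """Load an audio file and return its MFCC frames, shape (frames, N_MFCC)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)
    return mfcc.T

def make_training_pairs(frames):
    """Split the frame sequence into (input window, next frame) pairs."""
    X, y = [], []
    for i in range(len(frames) - SEQ_LEN):
        X.append(frames[i:i + SEQ_LEN])
        y.append(frames[i + SEQ_LEN])
    return np.array(X), np.array(y)

# A small LSTM that regresses the next MFCC frame from the preceding window.
model = Sequential([
    LSTM(128, input_shape=(SEQ_LEN, N_MFCC)),
    Dense(N_MFCC),
])
model.compile(optimizer="adam", loss="mse")

frames = extract_features("example.wav")  # hypothetical input file
X, y = make_training_pairs(frames)
model.fit(X, y, epochs=10, batch_size=64)

# Generation: seed with a real window, then append one predicted frame at a
# time; the resulting frame sequence is the spliced "new music" features.
window = frames[:SEQ_LEN].copy()
generated = []
for _ in range(200):
    nxt = model.predict(window[np.newaxis], verbose=0)[0]
    generated.append(nxt)
    window = np.vstack([window[1:], nxt])
generated = np.array(generated)  # (200, N_MFCC) generated feature frames
```

Note that this sketch treats prediction as regression over feature frames; synthesizing audible music from the generated MFCC frames would require an additional inversion step not covered here.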