Abstract

In recent years, deep neural networks have matured considerably, and since the introduction of the generative adversarial mechanism, academia has produced many results in image, video, and text generation. Scholars have therefore begun to apply similar approaches to music generation. Building on existing theory and prior work, this paper studies music production and proposes an intelligent music generation technique based on the generative adversarial mechanism, enriching research in the field of computer music generation. Taking GAN-based music generation as its research topic, this paper mainly studies the following: after reviewing existing GAN-based music generation models, a temporal structure model for maintaining musical coherence is proposed, which avoids manual input during generation and preserves the interdependence between tracks. The paper also studies and implements a method for generating discrete, multi-track music events, including a multi-track correlation model and discrete processing. The Lakh MIDI dataset is studied and pre-processed to obtain the LMD piano-roll dataset, which is used in the music generation experiments of MCT-GAN. For multi-track music generation with generative adversarial networks, three existing models are analyzed and a multi-track music generation method based on CT-GAN is proposed, which mainly improves existing GAN-based music generation models. Finally, the outputs of MCT-GAN are compared with those of MuseGAN to show the effect of the improvements: 20 auditees are asked to listen to generated and real music and to distinguish between them, and the results are analyzed. The evaluation concludes that the CT-GAN-based multi-track music generation improves on prior work.
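The abstract mentions pre-processing the Lakh MIDI dataset into a piano-roll dataset (LMD) for the generation experiments. Below is a minimal sketch of that kind of conversion, assuming the `pretty_midi` package and a local MIDI file; the exact preprocessing pipeline used for the paper's LMD piano-roll dataset is not specified here, and the file name and sampling rate are illustrative assumptions.

```python
# Sketch: convert a MIDI file into binary piano-roll matrices, one per track.
# Assumptions: pretty_midi is installed; the file path and fs are illustrative.
import numpy as np
import pretty_midi

def midi_to_piano_rolls(midi_path, fs=24):
    """Return a list of binary piano rolls, each of shape (128 pitches, T steps).

    fs is the number of time steps sampled per second.
    """
    pm = pretty_midi.PrettyMIDI(midi_path)
    rolls = []
    for instrument in pm.instruments:
        if instrument.is_drum:
            continue  # drum tracks would need a separate encoding
        roll = instrument.get_piano_roll(fs=fs)       # velocities in [0, 127]
        rolls.append((roll > 0).astype(np.uint8))     # binarize to note on/off
    return rolls

# Usage (hypothetical file name):
# rolls = midi_to_piano_rolls("lmd_example.mid", fs=24)
# print(len(rolls), rolls[0].shape)
```

Binarizing the velocities is a common simplification for GAN-based piano-roll models; whether the paper keeps velocity information is not stated in the abstract.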

Highlights

  • As an important mode of expression in the field of art, music embodies a series of uniquely human ways of thinking and is a unified combination of regularity and creativity [1]

  • Through the investigation, it is found that literature reviews on algorithmic composition are relatively numerous, but reviews of music generation based on deep learning, especially on generative adversarial networks, are largely missing

  • Among the testers whose accuracy falls in the 0%–25% range, most could not distinguish the pieces after listening and chose the "uncertain" option (C); post-test interviews with testers in the 75%–100% range revealed that some had guessed when making their judgments. Therefore, the manual evaluation results indicate that the music generation method proposed in this paper is effective (a scoring sketch follows this list)
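As referenced above, a minimal sketch of how per-listener discrimination accuracy could be scored is shown below. The response values and the rule that an "uncertain" answer counts as incorrect are illustrative assumptions; the paper's exact scoring protocol for its 20 auditees is not given here.

```python
# Sketch: score one listener's real-vs-generated judgments.
# Assumptions: answers are "real", "generated", or "uncertain"; "uncertain"
# counts as incorrect, reflecting that the listener could not distinguish.
def accuracy(responses, labels):
    """Fraction of clips the listener identified correctly."""
    correct = sum(1 for r, t in zip(responses, labels) if r == t)
    return correct / len(labels)

labels   = ["real", "generated", "real", "generated"]        # ground truth per clip
listener = ["real", "uncertain", "generated", "generated"]   # one listener's answers
print(f"accuracy = {accuracy(listener, labels):.0%}")         # -> 50%
```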

Summary

Introduction

As an important mode of expression in the field of art, music embodies a series of uniquely human ways of thinking and is a unified combination of regularity and creativity [1]. When describing the experimental results of their model algorithms, many related works in music generation use listening tests in which volunteers are organized to recognize the generated pieces by ear [7]. Early intelligent music was mainly generated in two ways: through statistical analysis and through Markov chain models. Deep learning has greatly improved the accuracy of image classification, even exceeding human classification ability, and has been applied successfully in natural language processing. Through the investigation, it is found that literature reviews on algorithmic composition are relatively numerous, but reviews of music generation based on deep learning, especially on generative adversarial networks, are largely missing.
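As a point of contrast with the GAN-based approach studied in this paper, below is a minimal sketch of the earlier Markov chain style of composition mentioned above: a first-order chain over pitches. The transition table, seed pitch, and melody length are illustrative assumptions, not values from the paper.

```python
# Sketch: first-order Markov chain melody generation (illustrative values).
import random

# Transition probabilities between MIDI pitches (C4=60, D4=62, E4=64, F4=65, G4=67).
transitions = {
    60: {62: 0.5, 64: 0.3, 67: 0.2},
    62: {60: 0.4, 64: 0.6},
    64: {62: 0.3, 65: 0.4, 67: 0.3},
    65: {64: 0.7, 67: 0.3},
    67: {60: 0.5, 65: 0.5},
}

def sample_melody(start_pitch=60, length=16, seed=None):
    """Generate a pitch sequence by repeatedly sampling the transition table."""
    rng = random.Random(seed)
    melody = [start_pitch]
    for _ in range(length - 1):
        options = transitions[melody[-1]]
        next_pitch = rng.choices(list(options), weights=list(options.values()), k=1)[0]
        melody.append(next_pitch)
    return melody

print(sample_melody(seed=42))
```

Such chains capture local pitch statistics but no long-range structure, which is one motivation for the temporal structure model and adversarial training discussed in this paper.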

Related Work
Derivative Models of Generative Adversarial Networks
Conclusion
