Abstract

To study the role of generative adversarial networks (GANs) in music generation, this article builds a convolutional GAN-based MidiNet as a baseline model, drawing on the music generation process, creative psychology education, and the GAN principle. It further proposes a music generation model constrained by music theory rules and a chord-constrained dual-track GAN music generation model. On this basis, a deep chord gated recurrent generative adversarial network (DCG_GAN) is proposed. The generated melodies are evaluated both subjectively and objectively. The results show that DCG_GAN scores highest on all three subjective evaluation indicators: ordinary listeners give it an average score of 3.76 points and professionals 3.58 points, which are 0.69 and 1.31 points higher than the baseline model, respectively. In the objective evaluation, DCG_GAN improves the empty bar rate (EBR) by 8.075%, raises the used pitch classes (UPC, num_chroma_used) index by 0.52 over the baseline model, and improves the qualified note ratio (QNR) by up to 4.46% across the five audio tracks. The proposed overall style-based music generation model thus shows superior performance in music generation. Both subjective and objective evaluations indicate that the generated music is better received by listeners, suggesting that combining deep learning with GANs is highly effective for music generation.
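The objective metrics named above are commonly computed from a piano-roll representation of the generated tracks. The following is a minimal sketch of how EBR and UPC might be measured; the `(timesteps, pitches)` array layout, the bar length of 16 steps, and the function names are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def empty_bar_rate(pianoroll, steps_per_bar=16):
    """Fraction of bars with no active notes (EBR).

    pianoroll: binary array of shape (timesteps, pitches); a bar is
    `steps_per_bar` consecutive timesteps (assumed layout).
    """
    n_bars = pianoroll.shape[0] // steps_per_bar
    bars = pianoroll[:n_bars * steps_per_bar].reshape(n_bars, steps_per_bar, -1)
    # A bar is empty when no cell in it is active.
    return float((bars.sum(axis=(1, 2)) == 0).mean())

def used_pitch_classes(pianoroll):
    """UPC (num_chroma_used): count of distinct pitch classes used,
    assuming pitch index equals MIDI note number."""
    active_pitches = np.flatnonzero(pianoroll.sum(axis=0) > 0)
    return len({int(p) % 12 for p in active_pitches})
```

For example, a two-bar roll whose notes all fall in the first bar would give an EBR of 0.5, and a C-major triad (MIDI 60, 64, 67) would give a UPC of 3.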
