Abstract

Music composition using artificial intelligence has attracted increasing research attention in recent years. However, existing methods often generate music that lacks coherence and authenticity. This paper proposes an evolutionary computation-based deep learning approach to music composition that incorporates data analysis. Specifically, we use long short-term memory (LSTM) networks to generate melodic sequences and employ a grey wolf optimizer to tune the LSTM hyperparameters. The training data is first converted to musical instrument digital interface (MIDI) format for analysis, and melody lines are extracted using a similarity matrix method. The MIDI data is then encoded as input to the LSTM networks. The generated music is evaluated with objective metrics such as mean squared error and with subjective methods, including surveys of music professionals. Comparisons against benchmark algorithms such as generative adversarial networks demonstrate the advantages of our approach in accurately capturing tone, rhythm, artistic conception, and other attributes of high-quality music. The proposed mechanism provides a practical framework for AI-based music generation while preserving authenticity.
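
The abstract does not include implementation details, so the following is a minimal Python sketch of how a grey wolf optimizer could tune LSTM hyperparameters. The fitness function lstm_val_loss is a hypothetical stand-in (in the paper's setting it would be the validation loss of an LSTM trained with the candidate hyperparameters), and the search bounds and variable names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def gwo_optimize(fitness, bounds, n_wolves=8, n_iters=20, seed=0):
    """Grey wolf optimizer: the three best wolves (alpha, beta, delta)
    guide the rest of the pack toward low-fitness regions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    scores = np.array([fitness(w) for w in wolves])

    for t in range(n_iters):
        a = 2.0 * (1 - t / n_iters)               # decays from 2 to 0
        leaders = wolves[np.argsort(scores)[:3]]  # alpha, beta, delta
        for i in range(n_wolves):
            candidates = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                # exploration/exploitation
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                candidates.append(leader - A * D)
            # New position: average of the three leader-guided moves,
            # clipped back into the hyperparameter bounds.
            wolves[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
            scores[i] = fitness(wolves[i])

    best = np.argmin(scores)
    return wolves[best], scores[best]

# Hypothetical stand-in fitness: in practice this would train an LSTM
# with the candidate (units, learning rate) and return validation loss;
# integer hyperparameters such as unit count would also be rounded.
def lstm_val_loss(x):
    units, lr = x
    return (units - 128) ** 2 / 1e4 + (np.log10(lr) + 3) ** 2

bounds = np.array([[32, 256],       # number of LSTM units (assumed range)
                   [1e-4, 1e-2]])   # learning rate (assumed range)
best, loss = gwo_optimize(lstm_val_loss, bounds)
print("best hyperparameters:", best, "loss:", loss)
```

The decaying control parameter a shifts the pack from wide exploration early on (|A| > 1 pushes wolves away from the leaders) to local exploitation later (|A| < 1 pulls them in), which is the standard trade-off the grey wolf optimizer relies on.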
