Abstract

Recurrent neural networks (RNNs) are now widely used for sequence generation tasks due to their ability to learn long-range dependencies and to generate sequences of arbitrary length. However, their left-to-right generation procedure allows only limited control from a potential user, which makes them unsuitable for interactive and creative usages such as interactive music generation. This article introduces a novel architecture called anticipation-RNN, which retains the assets of RNN-based generative models while allowing the enforcement of user-defined unary constraints. We demonstrate its efficiency on the task of generating melodies satisfying unary constraints in the style of the soprano parts of the J.S. Bach chorale harmonizations. Sampling with the anticipation-RNN is of the same order of complexity as sampling from a traditional RNN model. This fast and interactive generation of musical sequences opens up ways to devise real-time systems that could be used for creative purposes.
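
To make the architecture described above more concrete, the following is a minimal sketch, in PyTorch, of a constraint-aware left-to-right generative model in the spirit of the anticipation-RNN: one recurrent network reads the user-defined unary constraints from right to left, so that each time step carries a summary of the constraints placed on future positions, and a second recurrent network generates tokens from left to right conditioned on these summaries. This is an illustrative, assumption-laden sketch, not the reference implementation (see the linked repository for that); all class, attribute, and hyperparameter names below are invented for exposition.

# Illustrative sketch only -- not the reference implementation from
# https://github.com/Ghadjeres/Anticipation-RNN. Module names, sizes and
# the exact conditioning scheme are assumptions made for exposition.
import torch
import torch.nn as nn


class AnticipationRNNSketch(nn.Module):
    def __init__(self, num_tokens, num_constraint_symbols, hidden_size=128):
        super().__init__()
        self.token_embedding = nn.Embedding(num_tokens, hidden_size)
        self.constraint_embedding = nn.Embedding(num_constraint_symbols, hidden_size)
        # Reads the unary constraints from right to left, so its output at
        # position t summarizes the constraints on positions t, t+1, ...
        self.constraint_rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        # Generates tokens from left to right, conditioned on that summary.
        self.token_rnn = nn.GRU(2 * hidden_size, hidden_size, batch_first=True)
        self.output_proj = nn.Linear(hidden_size, num_tokens)

    def constraint_summaries(self, constraint_codes):
        # constraint_codes: (batch, seq_len) integers, one symbol per position
        # (e.g. "no constraint" or the pitch imposed at that position).
        reversed_codes = torch.flip(constraint_codes, dims=[1])
        embedded = self.constraint_embedding(reversed_codes)
        summaries, _ = self.constraint_rnn(embedded)
        return torch.flip(summaries, dims=[1])  # re-align with the time axis

    def forward(self, tokens, constraint_codes):
        # Teacher-forced training pass: the prediction at step t sees the
        # tokens up to t and a summary of the constraints from t onwards.
        summaries = self.constraint_summaries(constraint_codes)
        inputs = torch.cat([self.token_embedding(tokens), summaries], dim=-1)
        hidden_states, _ = self.token_rnn(inputs)
        return self.output_proj(hidden_states)  # logits, (batch, seq_len, num_tokens)

One natural training setup (an assumption here, not a claim about the paper's exact procedure) is to derive the constraint sequence for each training melody by keeping a randomly chosen subset of its notes as constrained positions, so the model learns to anticipate constraints it will later have to satisfy.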

Highlights

  • A number of powerful generative models for symbolic music have been proposed [14]

  • In order to address issues raised by the left-to-right sampling scheme, approaches based on MCMC methods have been proposed, both for monophonic sequences with shallow models [25] and for polyphonic musical pieces with deeper models [12, 13]

  • We presented the anticipation-RNN, a simple but efficient way to generate sequences in a learned style while enforcing unary constraints


Summary

Introduction

A number of powerful generative models for symbolic music have been proposed [14]. In order to address issues raised by the left-to-right sampling scheme, approaches based on MCMC methods have been proposed, both for monophonic sequences with shallow models [25] and for polyphonic musical pieces with deeper models [12, 13]. While these MCMC methods make it possible to generate musically convincing sequences that satisfy many user-defined constraints, the generation process is generally orders of magnitude slower than the simpler left-to-right generation scheme, which can prevent these models from being used in real-time settings, for instance. Code is available at https://github.com/Ghadjeres/Anticipation-RNN, and the musical examples presented in this article can be listened to on the accompanying Web site: https://sites.google.com/view/anticipation-rnn-examples/accueil
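
To illustrate why the left-to-right scheme remains fast, here is a hedged sketch of the kind of sampling loop such a model permits: a single pass over the sequence, linear in its length, in which constrained positions are fixed to their required token and unconstrained positions are sampled from the model's predictive distribution. The function below assumes a model with the interface of the sketch given after the abstract; it is not the repository's API, and its names and arguments are invented for exposition.

# Illustrative sampling loop (assumed interface, not the repository's API).
import torch


@torch.no_grad()
def sample_with_unary_constraints(model, constraints, no_constraint_id, start_token=0):
    # constraints: list of length seq_len where constraints[t] is either None
    # (free position) or the id of the token imposed at position t.
    seq_len = len(constraints)
    constraint_codes = torch.tensor(
        [[c if c is not None else no_constraint_id for c in constraints]]
    )
    summaries = model.constraint_summaries(constraint_codes)  # (1, seq_len, hidden)

    tokens = [start_token]
    hidden = None
    for t in range(seq_len):
        token_emb = model.token_embedding(torch.tensor([[tokens[-1]]]))
        step_input = torch.cat([token_emb, summaries[:, t:t + 1]], dim=-1)
        output, hidden = model.token_rnn(step_input, hidden)
        logits = model.output_proj(output[:, -1])
        if constraints[t] is not None:
            next_token = constraints[t]  # the unary constraint is enforced exactly
        else:
            probs = torch.softmax(logits, dim=-1)
            next_token = torch.multinomial(probs, 1).item()
        tokens.append(next_token)
    return tokens[1:]  # drop the start token; one model call per position

A pass like this visits each position exactly once, which is why sampling stays of the same order of complexity as for a plain RNN, in contrast with MCMC schemes that revisit positions many times before the chain produces a usable sequence.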

Statement of the problem
The model
Implementation details
Enforcing the constraints
Anticipation capabilities
Sampling with the correct probabilities
Musical examples
Conclusion
Findings
Compliance with ethical standards
