The two main research threads in computer-based music generation are the construction of autonomous music-making systems and the design of computer-based environments that assist musicians. In the symbolic domain, the key problem of automatically arranging a piece of music has been studied extensively, while relatively few systems have tackled this challenge in the audio domain. In this contribution, we propose CycleDRUMS, a novel method for generating drums given a bass line. After converting the bass waveform into a mel-spectrogram, we can automatically generate original drums that follow the beat, sound credible, and can be mixed directly with the input bass. We formulated this task as an unpaired image-to-image translation problem and addressed it with CycleGAN, a well-established unsupervised style-transfer framework originally designed for images. The choice to work with raw audio and mel-spectrograms allowed us to better represent how humans perceive music and to draw sounds for new arrangements from the vast collection of music recordings accumulated over the last century. In the absence of an objective way of evaluating the output of either generative adversarial networks or generative music systems, we further defined a possible metric for the proposed task, based in part on human (and expert) judgment. Finally, as a comparison, we replicated our results with Pix2Pix, a paired image-to-image translation network, and showed that our approach outperforms it.