Abstract
The two main research threads in computer-based music generation are the construction of autonomous music-making systems and the design of computer-based environments to assist musicians. In the symbolic domain, the key problem of automatically arranging a piece of music has been studied extensively, while relatively few systems have tackled this challenge in the audio domain. In this contribution, we propose CycleDRUMS, a novel method for generating drums given a bass line. After converting the waveform of the bass into a mel-spectrogram, we can automatically generate original drums that follow the beat, sound credible, and can be mixed directly with the input bass. We formulated this task as an unpaired image-to-image translation problem and addressed it with CycleGAN, a well-established unsupervised style-transfer framework originally designed for images. Working with raw audio and mel-spectrograms allowed us to better represent how humans perceive music and to draw sounds for new arrangements from the vast collection of music recordings accumulated over the last century. Since there is no objective way to evaluate the output of either generative adversarial networks or generative music systems, we further defined a possible metric for the proposed task, based in part on human (and expert) judgment. Finally, as a comparison, we replicated our results with Pix2Pix, a paired image-to-image translation network, and showed that our approach outperforms it.
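As a minimal illustration of the preprocessing step the abstract describes, the sketch below converts a bass-line waveform into a log-scaled mel-spectrogram using librosa. The parameter values (sample rate, FFT size, hop length, number of mel bands) are illustrative assumptions, not the settings used in the paper; the resulting 2-D array is the image-like representation that an image-to-image translation network such as CycleGAN would consume.

```python
# Sketch of the waveform-to-mel-spectrogram conversion described in the abstract.
# All parameter values below are assumptions for illustration, not the paper's settings.
import librosa
import numpy as np

def bass_to_mel(path, sr=22050, n_fft=2048, hop_length=512, n_mels=128):
    """Load an audio file and return its log-scaled mel-spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)  # raw waveform as a 1-D array
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    # Convert power to decibels: a 2-D (n_mels x time) array that can be
    # treated as a single-channel image by an image-to-image network.
    return librosa.power_to_db(mel, ref=np.max)
```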