Dance is typically accompanied by music to enhance stage performance. Manual music arrangement, however, is time-consuming and labor-intensive, a problem that automatic music arrangement from an input dance video can address. For this cross-modal music generation task, we exploit the complementary information in two input modalities: facial expressions and dance movements. We present Dance2MusicNet (D2MNet), an autoregressive generative model based on dilated convolutions that adopts two feature vectors, dance style and beat, as control signals to generate realistic and diverse music matching the dance video. Finally, we propose a comprehensive qualitative and quantitative evaluation protocol. D2MNet outperforms baseline methods on all evaluation metrics, which demonstrates the effectiveness of our framework.
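The abstract describes an autoregressive generator built from dilated convolutions and conditioned on two control vectors (dance style and beat). The paper does not give implementation details, so the following is only a minimal sketch of how such global conditioning is commonly wired into a gated dilated-causal-convolution block (all function names, shapes, and parameter layouts here are assumptions, not the authors' code):

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """Causal 1-D convolution, kernel size 2, given dilation.
    x: (T, C_in); w: (2, C_in, C_out); returns (T, C_out).
    Output at time t depends only on x[t] and x[t - dilation]."""
    T = x.shape[0]
    pad = np.zeros((dilation, x.shape[1]))
    past = np.concatenate([pad, x], axis=0)[:T]   # x shifted right by `dilation`
    return past @ w[0] + x @ w[1]

def d2m_block(x, style, beat, dilation, params):
    """One gated residual block, globally conditioned on hypothetical
    style and beat vectors (broadcast over all time steps)."""
    wf, wg, ws_f, ws_g, wb_f, wb_g, wr = params
    cond_f = style @ ws_f + beat @ wb_f           # conditioning bias, filter path
    cond_g = style @ ws_g + beat @ wb_g           # conditioning bias, gate path
    f = np.tanh(dilated_causal_conv(x, wf, dilation) + cond_f)
    g = 1.0 / (1.0 + np.exp(-(dilated_causal_conv(x, wg, dilation) + cond_g)))
    return x + (f * g) @ wr                       # residual connection
```

Stacking such blocks with dilations 1, 2, 4, ... grows the receptive field exponentially while keeping generation strictly causal, which is what makes sample-by-sample autoregressive decoding possible.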