Abstract

Existing music-driven dance generation methods tend to produce abnormal motion frames, which makes the overall dance movements unnatural. To address this, a music-driven dance generation method based on a spatial-temporal refinement model is proposed to optimize the abnormal frames. First, a cross-modal alignment model learns the correspondence between the audio and dance-video modalities, and this learned correspondence is used to match dance segments to the input music segments. Second, an abnormal-frame optimization algorithm is proposed to correct abnormal frames in the retrieved dance sequence. Finally, a temporal refinement model constrains music beats and dance rhythms along the temporal dimension to further strengthen the consistency between the music and the dance movements. Experimental results show that the proposed method generates realistic and natural dance video sequences, reducing the FID score by 1.2 and improving the diversity score by 1.7.
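The abstract does not specify how abnormal frames are detected or repaired; purely as an illustration, the sketch below treats abnormal frames as velocity outliers in the joint trajectories and repairs them by interpolating between neighbouring frames. The function name, threshold, and pose layout are hypothetical and not taken from the paper.

```python
import numpy as np

def smooth_abnormal_frames(pose_seq, k=3.0):
    """Hypothetical sketch: flag frames whose mean joint speed exceeds
    the sequence mean by k standard deviations, then replace each
    flagged frame with the midpoint of its neighbours.

    pose_seq: (T, J, 2) array of 2D joint positions per frame.
    """
    vel = np.linalg.norm(np.diff(pose_seq, axis=0), axis=-1)  # (T-1, J) per-joint speeds
    frame_vel = vel.mean(axis=1)                              # mean speed per transition
    mu, sigma = frame_vel.mean(), frame_vel.std()
    abnormal = np.where(frame_vel > mu + k * sigma)[0] + 1    # frames entered by a spike

    fixed = pose_seq.copy()
    for t in abnormal:
        hi = min(t + 1, len(pose_seq) - 1)
        fixed[t] = 0.5 * (pose_seq[t - 1] + pose_seq[hi])     # midpoint interpolation
    return fixed, abnormal

# Toy usage: a smooth random walk with one injected jump.
rng = np.random.default_rng(0)
poses = np.cumsum(rng.normal(scale=0.01, size=(100, 17, 2)), axis=0)
poses[50] += 5.0                                              # simulated abnormal frame
_, flagged = smooth_abnormal_frames(poses)
print("flagged frames:", flagged)                             # frames adjacent to the jump
```

In the pipeline described above, a step of this kind would sit between the cross-modal retrieval stage and the temporal beat-consistency refinement.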
