As a form of artistic expression, dance accompanied by music enriches cultural life and stimulates the public's creative enthusiasm. Choreography is traditionally carried out by professional choreographers; it demands considerable expertise and is time-consuming. Advances in technology are changing the way art is created, and the development of motion capture and artificial intelligence makes computer-based automatic choreography possible. This paper proposes a music-driven choreography method based on deep learning. First, we use Kinect to capture and filter motion data, obtaining movements with high authenticity and continuity. Then, based on the constant-Q transform, the overall note density and beats per minute (BPM) of the target music are extracted and preliminarily matched against motion features such as speed and spatial extent; local features of the music and motion segments, based on rhythm and intensity, are then matched. Experimental results show that the proposed method can effectively synthesize dance movements: the speed and other characteristics of each movement segment in the synthesized result are consistent, and the overall choreography is more aesthetically pleasing.
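A minimal sketch of the music-side feature extraction outlined above, assuming librosa as the audio library (the paper does not name its tools); the file name "target.wav", the function names, and the use of onset counts as a note-density proxy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import librosa

def extract_music_features(path="target.wav"):
    # Load the target music at its native sample rate.
    y, sr = librosa.load(path, sr=None)
    duration = len(y) / sr

    # Constant-Q transform: a log-frequency spectrogram suited to musical pitch.
    cqt = np.abs(librosa.cqt(y, sr=sr))

    # Global tempo estimate in beats per minute (BPM).
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

    # Detected note onsets, used here as a proxy for overall note density
    # (onsets per second across the whole piece).
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    note_density = len(onsets) / duration

    # Frame-level RMS energy, a simple intensity curve that could support
    # the local rhythm/intensity matching stage.
    rms = librosa.feature.rms(y=y)[0]

    return {"bpm": float(tempo), "note_density": note_density,
            "cqt": cqt, "rms": rms, "onsets": onsets}
```

Under this sketch, the global descriptors (BPM, note density) would drive the preliminary matching against motion speed and spatial extent, while the onset and intensity curves would drive the segment-level matching by rhythm and intensity.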