Abstract

In this paper, we propose a creative generation process model based on the quantum modeling and simulation method. The model is aimed mainly at generating the running trajectory of a dancing robot and the execution plan of its dance movements. First, we used digital twin technology to establish a data mapping between the robot and the computer simulation environment, realizing intelligent control of the robot's trajectory and of the dance movements described in this paper. Second, we conducted extensive experiments and research on information retrieval, information fidelity, and result evaluation. We constructed a multilevel three-dimensional spatial quantum knowledge map (M-3DQKG) based on the coherence and entangled states of quantum modeling and simulation. From dance videos, we used regions with convolutional neural networks (R-CNNs) to extract character skeletons and movement features to form a movement library. We used the M-3DQKG to quickly retrieve information from the knowledge base, action library, and database; the system then generated action models through a holistically nested edge detection (HED) network and rendered scenes matching those actions through generative adversarial networks (GANs). Finally, the scene and dance movements were integrated, completing the creative generation process. This paper also proposes the creative generation coefficient (CGC) as a means of evaluating the results of the creative process, combined with artificial-brain electroencephalographic (EEG) data to help assess how well the generated creativity matches the stated needs. This paper aims to automate the creative generation process, making it more intelligent, and to improve the quality and usability of the generated dance movements. Experiments show that our approach significantly improves the efficiency of knowledge retrieval and the accuracy of knowledge acquisition, and can generate unique and practical dance moves. The robot's trajectory is novel and changeable and can meet the needs of dance performances in different scenes. The creative generation process for dancing robots, combining deep learning and quantum technology, is a promising direction for future development and could provide a considerable boost to the progress of human society.

Highlights

  • This paper proposes the concept of the “creative generation coefficient” (CGC), which is a standardized value used to map the system’s current complexity and information volume

  • This paper builds a creative generation process based on a quantum modeling simulation framework and proposes a method that can generate multiple creative schemes for dance movements

  • The results of the creative generation can be viewed through a computer simulation engine and VR devices so that people can feel immersed in the experience
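To make the first highlight concrete, the following is a minimal sketch of how a standardized coefficient mapping complexity and information volume might be computed. The function name, the normalization bounds `c_max`/`i_max`, and the equal-weight combination are all illustrative assumptions; the paper's exact CGC formula is not reproduced here.

```python
def creative_generation_coefficient(complexity, information, c_max, i_max, alpha=0.5):
    """Hypothetical CGC sketch (not the paper's exact formula).

    Normalizes the system's current complexity and information volume
    against assumed upper bounds, then blends them into a single
    standardized value in [0, 1] via the weight alpha.
    """
    c_norm = min(complexity / c_max, 1.0)   # normalized complexity, clipped to 1
    i_norm = min(information / i_max, 1.0)  # normalized information volume, clipped to 1
    return alpha * c_norm + (1 - alpha) * i_norm
```

With equal weighting, a system at half its complexity bound and 80% of its information bound would score `0.65` under this sketch; the true coefficient would use whatever weighting and bounds the authors define.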


Introduction

Regions with convolutional neural networks (R-CNNs) recognize characters' actions and movement tracks in text, sound, and images, and extract bone information and displacement-point information. The holistically nested edge detection (HED) network generates abstract models of virtual human limbs as action templates from the extracted information. In the 3D engine, we bind the generated action template and bone attributes to the character model and drive the character model to complete the dance motion simulation. The QGAN network generates performance scenes based on the information extracted by the R-CNNs.
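The pipeline described above can be sketched as a sequence of stages. The stubs below are stand-ins only: `extract_skeletons`, `build_action_template`, and `render_scene` are hypothetical placeholders for the paper's R-CNN, HED, and GAN stages respectively, and the 17-joint skeleton and dictionary-shaped outputs are assumptions made to keep the sketch self-contained and runnable.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SkeletonFrame:
    """Hypothetical per-frame pose: 2-D joint coordinates plus figure displacement."""
    joints: List[Tuple[float, float]]
    displacement: Tuple[float, float]

def extract_skeletons(video_frames):
    """Stand-in for the R-CNN stage: emit one SkeletonFrame per input frame."""
    return [SkeletonFrame(joints=[(0.0, 0.0)] * 17, displacement=(0.0, 0.0))
            for _ in video_frames]

def build_action_template(skeletons):
    """Stand-in for the HED stage: abstract the skeleton sequence into a template
    by keeping roughly four evenly spaced keyframes."""
    step = max(1, len(skeletons) // 4)
    return {"length": len(skeletons), "keyframes": skeletons[::step]}

def render_scene(template):
    """Stand-in for the GAN stage: produce a scene description matching the action."""
    return {"scene": "stage", "action_length": template["length"]}

def creative_pipeline(video_frames):
    """Chain the stages: pose extraction -> action template -> matching scene."""
    skeletons = extract_skeletons(video_frames)
    template = build_action_template(skeletons)
    scene = render_scene(template)
    return template, scene
```

In the actual system, the template and bone attributes would then be bound to a character model inside the 3D engine; the sketch stops at the data hand-off between stages.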

