Abstract

In film and television animation, animated characters are the soul and core of the work: their behavior, language, and emotional expression play an important role in conveying the animation's theme and content. To address the limitation that mobile animation systems can add and modify actions only for a single virtual character, with no interaction between characters, this paper analyzes the technical principles, characteristics, and scope of application of human–computer interaction (HCI), taking sensors as the research object. An algorithm is proposed for separating the human body from the background environment in a depth image: depth values are computed and compared, so that the target body is effectively segmented from the background. During depth-data processing, pixel offset values are evaluated to identify body parts, and on this basis a sensor-based HCI system is designed. The depth-of-field data map acquired by the sensor is used to identify human body parts and determine actions, thereby realizing HCI based on action recognition. Simulation tests show that the system's effective rate is 80%, allowing the design of animated characters to move into the visualization stage. With the proposed algorithm, the physical signs of animated characters can be identified quickly, so that the next action in the animation can be captured more clearly; the approach therefore has practical value.
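The abstract describes the segmentation step only at a high level (depth values are computed and compared to separate body from background). The following is a minimal sketch of one common way such depth-range segmentation is done; the function name, the near/far thresholds, and the convention that a depth of 0 marks missing data are all assumptions, not the paper's actual implementation.

```python
import numpy as np

def segment_body(depth_frame: np.ndarray,
                 near_mm: float = 800.0,
                 far_mm: float = 2500.0) -> np.ndarray:
    """Separate the target body from the background in a depth image.

    A pixel is kept when its measured depth lies inside an assumed
    working range for the subject; a depth of 0 is treated as missing
    data, as is common for consumer depth sensors.
    """
    valid = depth_frame > 0                                   # drop missing readings
    in_range = (depth_frame >= near_mm) & (depth_frame <= far_mm)
    return valid & in_range                                   # boolean foreground mask

# Example: mask a synthetic 4x4 depth frame (values in millimetres).
frame = np.array([[0,    900, 1200, 4000],
                  [700,  950, 1300, 3900],
                  [650, 1000, 1250, 4100],
                  [0,   1100, 1400, 3800]], dtype=np.float32)
mask = segment_body(frame)  # True where the subject is assumed to stand
```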
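The body-part identification step is described only as "judging the pixel offset value." The sketch below shows one widely used form of such a feature, a depth-normalized comparison between two offset probes (in the style of the depth-comparison features popularized for Kinect body-part recognition); the abstract does not specify the paper's exact decision rule, so the function, the offset scaling, and the background fallback value are illustrative assumptions.

```python
import numpy as np

def offset_feature(depth: np.ndarray,
                   y: int, x: int,
                   u: tuple[float, float],
                   v: tuple[float, float],
                   background_mm: float = 1e6) -> float:
    """Depth-normalized pixel-offset comparison at pixel (y, x).

    The two offsets u and v are divided by the depth at (y, x), making
    the probe distances roughly invariant to how far the subject stands
    from the sensor. Probes that land off-image or on missing depth
    fall back to a large "background" value.
    """
    h, w = depth.shape
    d = float(depth[y, x])
    if d <= 0:
        return 0.0  # no reliable depth at the reference pixel

    def probe(offset: tuple[float, float]) -> float:
        py = int(round(y + offset[0] / d))
        px = int(round(x + offset[1] / d))
        if 0 <= py < h and 0 <= px < w and depth[py, px] > 0:
            return float(depth[py, px])
        return background_mm

    # A large positive or negative response hints at a depth edge
    # between the two probes, which a classifier can use to tell
    # body parts apart.
    return probe(u) - probe(v)
```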