Abstract

To improve the space-time similarity and motion smoothness of facial expression imitation (FEI), a real-time FEI method for a humanoid robot is proposed based on a smooth-constraint reversed mechanical model (SRMM), which combines a sequence-to-sequence deep learning model with a motion-smoothing constraint. First, using facial data from a Kinect capture device, a facial feature vector is constructed by cascading 3 head postures, 17 facial animation units, and the facial geometric deformation expressed in Laplace coordinates. Second, a reversed mechanical model is built with a multilayer long short-term memory (LSTM) neural network to map facial feature sequences directly to motor position sequences. Third, to suppress motor chattering during real-time FEI, a high-order polynomial is fitted to the motor position sequences, and the SRMM is designed around the resulting deviations in position, velocity, and acceleration. Finally, to imitate the facial feature sequences of a performer captured in real time by the Kinect, the optimal position sequences generated by the SRMM are sent to the hardware system so that the robot's space-time characteristics remain consistent with the performer's. Experimental results demonstrate that the motor position deviation of the SRMM is less than 8%, the space-time similarity between the robot and the performer is greater than 85%, and the motion smoothness of online FEI exceeds 90%. Compared with related methods, the proposed method achieves a marked improvement in motor position deviation, space-time similarity, and motion smoothness.
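
The reversed mechanical model is described only at a high level in the abstract; the following is a minimal sketch, assuming a PyTorch implementation, of a multilayer LSTM that maps a facial feature sequence to a motor position sequence. The feature dimension, hidden size, layer count, and motor count are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class ReversedMechanicalModel(nn.Module):
    """Multilayer LSTM mapping facial feature sequences to motor position
    sequences. All dimensions below are assumptions for illustration."""

    def __init__(self, feat_dim=40, hidden_dim=128, num_layers=3, num_motors=16):
        # feat_dim would cover 3 head postures + 17 animation units + the
        # Laplace-coordinate geometric deformation (dimension unknown here).
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_motors)

    def forward(self, features):           # (batch, time, feat_dim)
        out, _ = self.lstm(features)       # (batch, time, hidden_dim)
        return self.head(out)              # (batch, time, num_motors)

# Usage: 3 s of features at an assumed 30 fps Kinect rate.
feats = torch.randn(1, 90, 40)
positions = ReversedMechanicalModel()(feats)   # (1, 90, 16)
```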
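Likewise, a minimal sketch of the smoothing idea behind the SRMM: fit a high-order polynomial to a single motor's position sequence and measure the position, velocity, and acceleration deviations between the raw and fitted trajectories. The polynomial order, sampling rate, and mean-absolute-deviation metric are assumptions for illustration only.

```python
import numpy as np

POLY_ORDER = 6   # "high-order polynomial"; the actual order is an assumption
DT = 1.0 / 30.0  # assumed sampling period (Kinect at 30 fps)

def smooth_motor_sequence(positions):
    """Fit a polynomial to a raw motor position sequence; return the smoothed
    trajectory and the position/velocity/acceleration deviations."""
    t = np.arange(len(positions)) * DT
    coeffs = np.polyfit(t, positions, POLY_ORDER)

    # Smoothed position and its analytic derivatives from the fit.
    fitted = np.polyval(coeffs, t)
    vel = np.polyval(np.polyder(coeffs, 1), t)
    acc = np.polyval(np.polyder(coeffs, 2), t)

    # Finite-difference estimates of the raw sequence's derivatives.
    raw_vel = np.gradient(positions, DT)
    raw_acc = np.gradient(raw_vel, DT)

    deviations = {
        "position": np.abs(fitted - positions).mean(),
        "velocity": np.abs(vel - raw_vel).mean(),
        "acceleration": np.abs(acc - raw_acc).mean(),
    }
    return fitted, deviations

# Example: smooth a noisy ramp-and-hold motor command.
raw = np.concatenate([np.linspace(0.0, 1.0, 30), np.ones(30)])
raw += np.random.normal(0.0, 0.02, raw.shape)   # chattering-like jitter
smoothed, dev = smooth_motor_sequence(raw)
print({k: round(v, 4) for k, v in dev.items()})
```

In this sketch the deviation terms play the role the abstract assigns to the SRMM's smoothness constraint: a fit whose position deviation is small while its velocity and acceleration stay close to the raw trend removes chattering without distorting the trajectory.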
