This study presents a facial expression-driven interaction method for small quadruped robots, organized into three layers: Perception (L1), Analysis (L2), and Expression (L3). L1 handles data acquisition and image stabilization, L2 performs model training and emotion classification, and L3 manages control feedback and interactive movements. The core NXEIK network model comprises NAFNet for image stabilization, Mini-Xception for facial expression recognition, EANet for action mapping, and an inverse kinematics model. Validation on a self-designed robot platform showed that the method enhances jittery image data and outperforms deep learning networks such as ViT and ResNet-101 in facial expression classification. Using only three parameter types, the NXEIK model enables the robot to adapt its movements to different expressions. This research offers a feasible approach to enriching human-robot interaction and movement design for flexible quadruped robots through facial expressions.
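The NXEIK model can be read as a sequential composition of the four modules across the three layers. Below is a minimal Python sketch of that structure; the class interfaces, emotion labels, and motion-parameter names are illustrative assumptions, not the authors' implementation, and the trained networks are replaced by placeholders.

```python
# Hypothetical sketch of the NXEIK pipeline: NAFNet (stabilization) ->
# Mini-Xception (expression) -> EANet (action mapping) -> inverse kinematics.
# Module internals are placeholders standing in for the trained models.
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class JointCommand:
    """Joint angles for one leg (hypothetical 3-DoF leg)."""
    hip: float
    thigh: float
    knee: float


class NAFNetStabilizer:
    def stabilize(self, frame: np.ndarray) -> np.ndarray:
        # Placeholder: a trained NAFNet would deblur/denoise the jittery frame.
        return frame


class MiniXceptionClassifier:
    def classify(self, face: np.ndarray) -> str:
        # Placeholder: a trained Mini-Xception would output emotion probabilities.
        return "happy"


class EANetMapper:
    def map_action(self, emotion: str) -> Dict[str, float | str]:
        # Placeholder: maps a recognized emotion to high-level motion parameters
        # (the abstract mentions three parameter types; these names are assumed).
        table = {
            "happy": {"gait": "trot", "amplitude": 0.8, "frequency": 1.5},
            "sad": {"gait": "walk", "amplitude": 0.3, "frequency": 0.5},
        }
        return table.get(emotion, {"gait": "stand", "amplitude": 0.0, "frequency": 0.0})


class InverseKinematics:
    def solve(self, params: Dict[str, float | str]) -> List[JointCommand]:
        # Placeholder: converts motion parameters into per-leg joint angles.
        a = float(params["amplitude"])
        return [JointCommand(hip=0.0, thigh=0.4 * a, knee=-0.8 * a) for _ in range(4)]


class NXEIKPipeline:
    """Perception (L1) -> Analysis (L2) -> Expression (L3)."""

    def __init__(self) -> None:
        self.stabilizer = NAFNetStabilizer()         # L1: image stabilization
        self.classifier = MiniXceptionClassifier()   # L2: emotion classification
        self.mapper = EANetMapper()                  # L3: action mapping
        self.ik = InverseKinematics()                # L3: joint-space commands

    def step(self, frame: np.ndarray) -> List[JointCommand]:
        stable = self.stabilizer.stabilize(frame)
        emotion = self.classifier.classify(stable)
        params = self.mapper.map_action(emotion)
        return self.ik.solve(params)


if __name__ == "__main__":
    dummy_face = np.zeros((48, 48), dtype=np.float32)  # dummy grayscale face crop
    print(NXEIKPipeline().step(dummy_face))
```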