Abstract

This study introduces a facial expression-driven interaction method for small quadruped robots, structured in three layers: Perception (L1), Analysis (L2), and Expression (L3). L1 handles data acquisition and image stabilization, L2 handles model training and emotion classification, and L3 handles control feedback and interactive movements. The core NXEIK network model combines NAFNet for image stabilization, Mini-Xception for facial expression recognition, EANet for action mapping, and an inverse kinematics model. Validation on a self-designed robot platform showed that the method stabilizes jittery image data and outperforms deep learning networks such as ViT and ResNet-101 in facial expression classification. The NXEIK model enables the robot to adapt its movements to different expressions using only three parameter types. This research offers a feasible approach to enhancing human-robot interaction and movement design for flexible quadruped robots through facial expressions.
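
The abstract names the pipeline components but not their interfaces, so the sketch below is a minimal, hypothetical Python illustration of how the three layers could chain together. NAFNet, Mini-Xception, and EANet are replaced by stand-in stubs, the emotion labels and the "three parameter types" (here stride, step height, gait period) are assumptions, and the leg solver uses the standard planar two-link inverse-kinematics closed form rather than the paper's actual model.

```python
# Hypothetical sketch of the three-layer NXEIK pipeline (Perception -> Analysis
# -> Expression). All names below are illustrative stand-ins, not the authors' code.
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def stabilize(frame: np.ndarray) -> np.ndarray:
    """L1 (Perception): stand-in for NAFNet image stabilization/restoration."""
    return frame  # a trained NAFNet would return the restored frame here

def classify_expression(frame: np.ndarray) -> str:
    """L2 (Analysis): stand-in for Mini-Xception expression classification."""
    return EMOTIONS[int(frame.mean() * 255) % len(EMOTIONS)]  # placeholder logic

def map_action(emotion: str) -> dict:
    """L3 (Expression): stand-in for EANet mapping an emotion to gait parameters."""
    # Assumed three-parameter action description: stride (m), step height (m),
    # gait period (s); the paper's actual parameters are not specified here.
    table = {"happy": dict(stride=0.06, height=0.04, period=0.5),
             "sad":   dict(stride=0.02, height=0.01, period=1.2)}
    return table.get(emotion, dict(stride=0.04, height=0.02, period=0.8))

def leg_ik(x: float, z: float, l1: float = 0.1, l2: float = 0.1) -> tuple:
    """Planar two-link inverse kinematics for one leg (standard closed form)."""
    d2 = x * x + z * z
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = np.arccos(np.clip(cos_knee, -1.0, 1.0))
    hip = np.arctan2(z, x) - np.arctan2(l2 * np.sin(knee), l1 + l2 * np.cos(knee))
    return hip, knee

if __name__ == "__main__":
    frame = np.random.rand(48, 48)                 # dummy camera frame
    emotion = classify_expression(stabilize(frame))
    params = map_action(emotion)
    print(emotion, params, leg_ik(params["stride"], -0.12))
```

In a real deployment the stubs would be swapped for the trained networks; only the emotion-to-parameter mapping and the per-leg inverse-kinematics solve would run inside the robot's control loop.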
