Abstract
This study proposes a telemanipulation framework that uses two wearable IMU sensors without relying on human skeletal kinematics. First, the states (intensity and direction) of spatial hand-guiding gestures are estimated separately by the proposed state estimator, and these states are combined with the gesture’s mode (linear, angular, or via) obtained from a bi-directional LSTM-based mode classifier. By integrating the gesture’s mode and state, spatial linear and angular motions are combined to control the pose of the 6-DOF manipulator’s end-effector (EEF). To validate the proposed method, teleoperation of the EEF to designated target poses was conducted in a motion-capture space. The results confirm that the mode can be classified in real time with 84.5% accuracy, even during the operator’s dynamic movement; that the direction can be estimated with an error of less than 1 degree; and that the intensity can be estimated with the gesture-speed estimator and finely tuned with a scaling factor. Finally, a subject was able to place the EEF within an average of 83 mm and 2.56 degrees of the target pose using fewer than ten consecutive hand-guiding gestures and visual inspection on the first trial.
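As a rough illustration of the mode-classification stage described above, the sketch below shows a bi-directional LSTM that maps a window of two-IMU measurements to one of the three gesture modes (linear, angular, via). The 12-channel input (two IMUs, accelerometer plus gyroscope), the layer sizes, and the PyTorch implementation are illustrative assumptions and do not reflect the paper's reported architecture or hyperparameters.

    import torch
    import torch.nn as nn

    class GestureModeClassifier(nn.Module):
        """Bi-directional LSTM mapping an IMU window to a gesture mode.

        Assumed input: (batch, time, 12) -- two IMUs x (3-axis accel + 3-axis gyro).
        Output: logits over the three modes {linear, angular, via}.
        """
        def __init__(self, n_channels: int = 12, hidden: int = 64, n_modes: int = 3):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_modes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out, _ = self.lstm(x)          # (batch, time, 2*hidden)
            return self.head(out[:, -1])   # classify from the last time step

    # Minimal usage: classify one 1-second window sampled at an assumed 100 Hz.
    window = torch.randn(1, 100, 12)
    logits = GestureModeClassifier()(window)
    mode = logits.argmax(dim=-1)           # e.g., 0: linear, 1: angular, 2: via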