Abstract

Touch-free guided hand gesture recognition for human-robot interaction plays an increasingly significant role in teleoperated surgical robot systems. Although depth cameras provide additional information that can enhance recognition accuracy, the instability and computational burden of depth data remain a challenging problem. In this letter, we propose a novel multi-sensor guided hand gesture recognition system for surgical robot teleoperation. A multi-sensor data fusion model is designed to perform inference in the presence of occlusions. A multilayer Recurrent Neural Network (RNN) consisting of a Long Short-Term Memory (LSTM) module and a dropout layer (LSTM-RNN) is proposed for classifying multiple hand gestures. The detected hand gestures are used to perform a set of human-robot collaboration tasks on a surgical robot platform. Classification performance and prediction time are compared between the LSTM-RNN model and several traditional Machine Learning (ML) algorithms, such as k-Nearest Neighbor (k-NN) and Support Vector Machines (SVM). Results show that the proposed LSTM-RNN classifier achieves a higher recognition rate and a faster inference speed. In addition, the proposed adaptive data fusion system shows strong anti-interference capability for real-time hand gesture recognition.
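The abstract gives no implementation details, but the classifier it describes — an LSTM-based recurrent network with a dropout layer feeding a gesture classifier — can be sketched, purely for illustration, as a single-layer LSTM whose final hidden state passes through inverted dropout into a softmax output. All sizes, parameter names, and the single-layer structure below are assumptions for the sketch, not the authors' actual architecture or trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: four gates computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b          # stacked gate pre-activations, shape (4H,)
    H = h.shape[0]
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2*H])          # forget gate
    o = sigmoid(z[2*H:3*H])        # output gate
    g = np.tanh(z[3*H:4*H])        # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_sequence(seq, params, drop_rate=0.5, train=False):
    """Run the LSTM over a gesture sequence, apply dropout, classify with softmax."""
    W, U, b, Wo, bo = params
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:                  # one fused multi-sensor feature vector per frame
        h, c = lstm_step(x, h, c, W, U, b)
    if train:                      # inverted dropout on the final hidden state
        mask = (rng.random(H) >= drop_rate) / (1.0 - drop_rate)
        h = h * mask
    logits = Wo @ h + bo
    e = np.exp(logits - logits.max())
    return e / e.sum()             # class probabilities over gesture labels

# Hypothetical sizes: 10-dim fused sensor features, 16 hidden units, 6 gesture classes.
D, H, C = 10, 16, 6
params = (rng.normal(0, 0.1, (4*H, D)),  # W: input-to-gates weights
          rng.normal(0, 0.1, (4*H, H)),  # U: hidden-to-gates weights
          np.zeros(4*H),                 # b: gate biases
          rng.normal(0, 0.1, (C, H)),    # Wo: output projection
          np.zeros(C))                   # bo: output biases

seq = rng.normal(size=(20, D))           # a hypothetical 20-frame gesture sequence
probs = classify_sequence(seq, params)
print(probs.shape, float(probs.sum()))
```

At inference time (`train=False`) dropout is skipped, which is why the per-prediction cost reduces to the recurrent forward pass plus one linear projection — consistent with the fast inference the abstract reports, though the real system would be trained on labeled gesture data rather than use the random weights shown here.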
