Abstract

In this article, an effective control system based on a deep-learning (DL) algorithm is designed to administer and actuate an in-house 3-D-printed robotic hand using myoelectric sensors. Five servo motors actuate the robotic hand's components, which have 17 degrees of freedom (DoFs). This research investigates the minimum number of myoelectric sensors needed to actuate the robotic hand and fingers. A convolutional neural network (CNN) is employed to recognize hand- and finger-movement patterns and to classify gestures. Experiments are conducted with one to three sensors on the forearm and verified by following both moving and stationary reference hands. Across the configurations tested, at least three sensors are required to move all fingers as well as to perform open and close postures. Mapping raw sensor data directly to motor commands causes unintended finger shaking, because large variations in the input are translated into motor rotation instantaneously. This challenge is overcome by applying a CNN that classifies the hand gestures with high accuracy. The presented design is an inexpensive, simple, and easy-to-use robotic hand in both fabrication and classification, accessible to everyone, particularly resource-poor amputees. Experimental results confirm the effectiveness of the robotic hand for below-elbow amputations, such as wrist disarticulation and transradial amputation.
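The abstract contrasts mapping raw sensor readings directly to motor rotation (which causes finger shaking) with classifying windows of myoelectric data into discrete gestures via a CNN. As a rough illustration of that second approach, the sketch below implements a minimal 1-D CNN forward pass over a multi-channel signal window in plain NumPy. All names, layer sizes, window lengths, and class counts here are hypothetical placeholders, not details taken from the paper, and the weights are untrained random values, so this shows only the shape of the pipeline, not the authors' actual network.

```python
import numpy as np

np.random.seed(0)

# Hypothetical configuration (NOT from the paper): 3 EMG channels,
# 200-sample windows, 5 gesture classes.
N_CHANNELS, WINDOW, N_CLASSES = 3, 200, 5

def conv1d_relu(x, kernels, bias):
    """Valid-mode 1-D convolution with ReLU: x (C, T) -> (F, T-K+1)."""
    F, C, K = kernels.shape
    T = x.shape[1]
    out = np.zeros((F, T - K + 1))
    for f in range(F):
        for t in range(T - K + 1):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + K]) + bias[f]
    return np.maximum(out, 0.0)

def classify(window, params):
    """Return the predicted gesture index for one signal window."""
    h = conv1d_relu(window, params["w1"], params["b1"])
    h = h.max(axis=1)                      # global max pooling over time
    logits = params["w2"] @ h + params["b2"]
    return int(np.argmax(logits))

# Untrained random parameters, for illustration only.
params = {
    "w1": np.random.randn(8, N_CHANNELS, 15) * 0.1,  # 8 filters, width 15
    "b1": np.zeros(8),
    "w2": np.random.randn(N_CLASSES, 8) * 0.1,
    "b2": np.zeros(N_CLASSES),
}

emg_window = np.random.randn(N_CHANNELS, WINDOW)  # stand-in for real EMG data
gesture = classify(emg_window, params)
```

Because the controller acts on one discrete class label per window rather than on every raw sample, small high-frequency fluctuations in the signal no longer reach the servos directly, which is the mechanism by which classification suppresses the shaking described above.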
