Abstract
A prosthetic hand is one of the main ways to help patients with upper limb disabilities regain their daily living abilities. Prosthetic hand manipulation must be coordinated with the user's action intention, so recognizing the action intention of the upper limb is the key to controlling the prosthetic hand. At present, methods that identify action intention from EMG and EEG signals still suffer from difficult signal decoding and low recognition rates. Inertial sensors are low in cost and high in accuracy, and the posture information they provide characterizes the motion state of the upper limb; visual information is information-rich and can identify the type of target object. The two modalities are complementary and can be fused to better capture the user's motion requirements. This paper therefore proposes an upper limb action intention recognition method based on the fusion of posture information and visual information. Inertial sensors collect attitude angle data during upper limb movement and, exploiting the structural similarity between the human upper limb and a linkage mechanism, a kinematic model of the upper limb is established using the forward kinematics of a robotic arm to solve for the end position of the upper limb. The end positions are classified into three categories, in front of the torso, near the upper body, and at the initial position, and a multilayer perceptron is trained to learn this classification. In addition, a miniature camera mounted on the hand captures visual images during upper limb movement; target objects are detected with the YOLOv5 deep learning method and classified into two categories, wearable items and non-wearable items. Finally, the upper limb intention is decided jointly from the upper limb motion state, the target object type, and the end position, and the decision is used to control the prosthetic hand. We applied the method to an experimental mechanical prosthetic hand system and invited several volunteers to test it. The experimental results show that the intention recognition success rate reaches 92.4%, verifying the feasibility and practicality of the proposed method.
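To make the kinematic step concrete, the following is a minimal sketch of solving the upper limb end position from IMU attitude angles, assuming a two-link model (upper arm and forearm); the segment lengths, axis conventions, and function names are illustrative rather than the paper's actual implementation.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from ZYX Euler (attitude) angles, in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def end_position(upper_arm_att, forearm_att, l_upper=0.30, l_fore=0.25):
    """Forward kinematics of a two-link upper limb model.

    upper_arm_att, forearm_att: (yaw, pitch, roll) attitude angles of the
    IMUs on the upper arm and forearm, expressed in the shoulder frame.
    Returns the hand (end) position in the shoulder frame, in metres.
    """
    # Each segment points along its local x-axis; rotate each segment into
    # the shoulder frame and chain the resulting vectors.
    elbow = rot_zyx(*upper_arm_att) @ np.array([l_upper, 0.0, 0.0])
    hand = elbow + rot_zyx(*forearm_att) @ np.array([l_fore, 0.0, 0.0])
    return hand
```

With all angles zero, both segments point along the x-axis and the hand lands at (0.55, 0, 0), i.e. the full arm length in front of the shoulder.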
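The end-position classifier can be sketched with a standard multilayer perceptron; the hidden-layer sizes and the training-data file names below are assumptions, as the abstract does not specify the network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# X: end positions from the forward-kinematics step, shape (n_samples, 3).
# y: labels 0 = in front of torso, 1 = near upper body, 2 = initial position.
# The .npy file names are hypothetical placeholders for the collected data.
X = np.load("end_positions.npy")
y = np.load("position_labels.npy")

clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=1000, random_state=0)
clf.fit(X, y)

region = clf.predict(X[:1])  # e.g. array([0]) -> in front of the torso
```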
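On the vision side, YOLOv5 can be loaded through the official torch.hub interface as sketched below; the pretrained 'yolov5s' checkpoint and the set of class names treated as wearable are stand-ins, since the paper presumably detects its own object categories with a fine-tuned model.

```python
import torch

# Pretrained YOLOv5 from the official hub; a stand-in for the paper's model.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# COCO class names assumed to count as wearable; purely illustrative.
WEARABLE = {"tie", "backpack", "handbag"}

def detect_object_type(frame):
    """Detect the most confident object and map it to wearable/non-wearable."""
    results = model(frame)
    det = results.pandas().xyxy[0]  # one row per detection
    if det.empty:
        return None, None
    best = det.sort_values("confidence", ascending=False).iloc[0]
    kind = "wearable" if best["name"] in WEARABLE else "non_wearable"
    return best["name"], kind
```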
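Finally, the joint decision can be expressed as a small rule table over the three fused cues. The mapping below is a plausible placeholder: the abstract states that motion state, object type, and end position jointly decide the intention, but does not give the actual decision rules.

```python
from enum import Enum
from typing import Optional

class EndPosition(Enum):
    TORSO_FRONT = 0  # in front of the torso
    UPPER_BODY = 1   # near the upper body
    INITIAL = 2      # initial (rest) position

class ObjectType(Enum):
    WEARABLE = 0
    NON_WEARABLE = 1

def decide_intention(moving: bool, end_pos: EndPosition,
                     obj: Optional[ObjectType]) -> str:
    """Fuse motion state, end position, and object type into an intention.

    The rules here are illustrative; the paper's actual decision table
    is not given in the abstract.
    """
    if not moving or end_pos is EndPosition.INITIAL:
        return "idle"   # arm at rest: keep the hand relaxed
    if obj is ObjectType.NON_WEARABLE and end_pos is EndPosition.TORSO_FRONT:
        return "grasp"  # reaching toward an object in front of the torso
    if obj is ObjectType.WEARABLE and end_pos is EndPosition.UPPER_BODY:
        return "wear"   # bringing a wearable item toward the body
    return "hold"       # otherwise maintain the current grip
```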