Abstract

An amputee cannot directly use visual perception to control the movements and gait patterns of a worn prosthesis. To help an amputee walk and cross over obstacles smoothly, this paper proposes a vision-locomotion coordination control method for a powered lower-limb prosthesis (PLLP), in which a vision system is developed to detect obstacles and a complete vision-locomotion loop is constructed. With deep learning techniques, the vision system can recognize common obstacles (e.g., garbage cans, bricks, and boxes) and extract their features (e.g., the distance from the obstacle to the depth camera and the obstacle height). By integrating the vision system into the locomotion control system, the PLLP can make obstacle-avoidance decisions and simultaneously use dynamic movement primitives with type-2 fuzzy models (T2FDMPs) to help amputees cross over obstacles. With the type-2 fuzzy models, smooth obstacle-crossing trajectories can be obtained. The experimental results show that the PLLP with visual information can adaptively switch an amputee's gait between level walking and obstacle avoidance, which demonstrates the effectiveness of the vision-locomotion coordination control system.

Note to Practitioners—This paper is motivated by the challenge of obstacle avoidance for prostheses. Traditional prostheses cannot achieve autonomous obstacle avoidance because they lack environmental perception capability. In addition, most prostheses rely on the human-robot interaction between amputees and prostheses to recognize the environment; however, due to noise and individual differences, the recognition results are not accurate enough. We found that integrating vision can improve the accuracy and efficiency of environmental recognition. Therefore, it is necessary to construct a complete vision-locomotion closed loop. In this paper, a vision-locomotion coordination control method is proposed, and to make the PLLP cross over obstacles smoothly, a novel trajectory-shaping method based on fuzzy dynamic movement primitives is developed. The coordination control is partitioned into obstacle detection, trajectory shaping, and joint control, which helps the PLLP fulfill several obstacle-avoidance tasks.
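The sketch below is a minimal, illustrative rendering of the two ideas summarized above, not the authors' implementation: a discrete dynamic movement primitive shapes a smooth swing trajectory toward a goal height, and the obstacle distance and height reported by the vision system decide whether that goal corresponds to level walking or obstacle crossing. The paper learns the DMP forcing term with type-2 fuzzy models; the Gaussian-basis forcing term used here, as well as the class names, gains, trigger distance, and clearance margins, are placeholder assumptions.

```python
# Minimal sketch (assumed names and parameters) of vision-driven trajectory shaping
# with a discrete dynamic movement primitive (DMP). In the paper the forcing term
# is produced by type-2 fuzzy models (T2FDMPs); a plain Gaussian-basis forcing term
# with unlearned (zero) weights stands in for it here.
import numpy as np

class DiscreteDMP:
    """Standard discrete DMP: tau*dz = az*(bz*(g - y) - z) + f(x), tau*dy = z."""
    def __init__(self, n_basis=20, az=25.0, bz=6.25, ax=1.0, tau=1.0):
        self.az, self.bz, self.ax, self.tau = az, bz, ax, tau
        self.c = np.exp(-ax * np.linspace(0, 1, n_basis))          # basis centres
        self.h = 1.0 / np.diff(self.c, append=self.c[-1] * 0.5)**2  # basis widths
        self.w = np.zeros(n_basis)   # weights; learned via T2FDMPs in the paper

    def forcing(self, x, y0, g):
        psi = np.exp(-self.h * (x - self.c)**2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)

    def rollout(self, y0, g, dt=0.01, T=1.0):
        y, z, x, traj = y0, 0.0, 1.0, []
        for _ in range(int(T / dt)):
            f = self.forcing(x, y0, g)
            dz = (self.az * (self.bz * (g - y) - z) + f) / self.tau
            dy = z / self.tau
            dx = -self.ax * x / self.tau
            z, y, x = z + dz * dt, y + dy * dt, x + dx * dt
            traj.append(y)
        return np.array(traj)

def shape_swing_trajectory(obstacle_distance_m, obstacle_height_m,
                           nominal_clearance_m=0.30, trigger_distance_m=0.8):
    """Switch between level walking and obstacle crossing from vision features."""
    dmp = DiscreteDMP()
    if obstacle_distance_m > trigger_distance_m:
        # Obstacle still far away: keep the level-walking swing height.
        return dmp.rollout(y0=0.0, g=nominal_clearance_m)
    # Obstacle imminent: raise the swing apex above the detected obstacle height.
    goal = obstacle_height_m + 0.10   # 10 cm safety margin (assumed)
    return dmp.rollout(y0=0.0, g=goal)

# Example: a 0.25 m high obstacle detected 0.6 m ahead raises the crossing goal.
traj = shape_swing_trajectory(obstacle_distance_m=0.6, obstacle_height_m=0.25)
print(traj.max())
```

In the coordination control described above, the shaped trajectory would then be passed to the joint controllers of the PLLP, completing the obstacle detection, trajectory shaping, and joint control pipeline.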
