Abstract
Detecting human movement intentions is fundamental to neural control of robotic exoskeletons, as it is essential for achieving seamless transitions between different locomotion modes. In this study, we enhanced a muscle synergy-inspired method of locomotion mode identification by fusing the electromyography data with two types of data from wearable sensors (inertial measurement units), namely linear acceleration and angular velocity. From the finite state machine perspective, the enhanced method was used to systematically identify 2 static modes, 7 dynamic modes, and 27 transitions among them. In addition to the five broadly studied modes (level ground walking, ramp ascent/descent, stair ascent/descent), we identified transitions between different walking speeds and modes of ramp walking at different inclination angles. Seven sensor fusion combinations were evaluated on experimental data from 8 able-bodied adult subjects, and their classification accuracies and prediction times were compared. Fusing electromyography and gyroscope (angular velocity) data predicted transitions earlier and with higher accuracy. All transitions and modes were identified with a total average classification accuracy of 94.5% with fused sensor data. For nearly all transitions, we were able to predict the next locomotion mode 300-500 ms prior to the step into that mode.
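To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of combining windowed EMG features with gyroscope features into a single vector for locomotion mode classification. The window length, feature choices, and the LDA classifier are illustrative assumptions; the paper's actual method is muscle synergy-inspired, which this sketch does not reproduce.

# Minimal sketch: fusing EMG and angular-velocity features for mode classification.
# All feature choices and the classifier are assumptions, not the paper's method.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(window):
    """Mean absolute value and waveform length per EMG channel (rows = samples)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

def gyro_features(window):
    """Mean and standard deviation per gyroscope axis."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fused_feature_vector(emg_window, gyro_window):
    """Concatenate EMG and angular-velocity features into one fused vector."""
    return np.concatenate([emg_features(emg_window), gyro_features(gyro_window)])

# Hypothetical usage: rows of X_train are fused vectors from analysis windows
# ending at gait events; y_train holds mode/transition labels.
# clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
# predicted = clf.predict(fused_feature_vector(emg_w, gyro_w).reshape(1, -1))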
Highlights
Robotic exoskeletons have extended their movement assistance focus from only level-ground walking to include various locomotion modes, such as ramp walking and stair ascent/descent.
We aim to investigate whether the classification accuracy of a muscle synergy-inspired locomotion mode identification method can be improved by fusing EMG signals with signals from mechanical sensors.
Transitions predicted with sensor configurations 2-5, which fused EMG (ipsilateral or bilateral) with bilateral accelerometer and/or gyroscope data, and with configurations 6 and 7, which used only mechanical sensor data, had higher classification accuracy (CA) than those predicted with configuration 1, which used only ipsilateral EMG data (p < 0.05).
Summary
Robotic exoskeletons have extended their movement assistance focus from only level-ground walking to include various locomotion modes, such as ramp walking and stair ascent/descent. To efficiently assist multiple locomotion modes, powered exoskeletons should transition between different locomotion modes seamlessly. A few commercially available exoskeletons adjust their assistance strategies between different modes using signals from remote controls, non-intuitive muscle contractions, and physical signals such as tapping the foot [1], [2], none of which provide seamless transitions between different locomotion modes. Achieving seamless transitions between locomotion modes requires accurate movement intention detection. To this end, both model-based approaches, i.e. using neural-driven musculoskeletal models [3], and model-free approaches have been explored.
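As a rough illustration of the finite-state-machine view of seamless mode switching described above, the sketch below gates a classifier's prediction by a table of legal transitions, so the controller only changes modes along allowed paths. The mode names and the transition table are illustrative assumptions and do not reproduce the paper's full set of 9 modes and 27 transitions.

# Minimal sketch: accept a predicted mode only if it is a legal transition
# from the current mode; otherwise keep the current mode.
# Mode names and allowed transitions are illustrative assumptions.
ALLOWED_TRANSITIONS = {
    "standing":      {"standing", "level_walk", "stair_ascent", "ramp_ascent"},
    "level_walk":    {"level_walk", "standing", "stair_ascent", "stair_descent",
                      "ramp_ascent", "ramp_descent"},
    "stair_ascent":  {"stair_ascent", "level_walk", "standing"},
    "stair_descent": {"stair_descent", "level_walk", "standing"},
    "ramp_ascent":   {"ramp_ascent", "level_walk", "standing"},
    "ramp_descent":  {"ramp_descent", "level_walk", "standing"},
}

def next_mode(current_mode, predicted_mode):
    """Advance the state machine only along a legal transition."""
    if predicted_mode in ALLOWED_TRANSITIONS.get(current_mode, set()):
        return predicted_mode
    return current_mode  # reject an illegal prediction and keep the current mode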