Abstract

Autonomous lower-limb exoskeletons must modulate assistance based on locomotion mode (e.g., ramp or stair ascent) to adapt to the corresponding changes in human biological joint dynamics. However, current mode classification strategies for exoskeletons often require user-specific tuning, have slow update rates, and rely on sensors outside of the exoskeleton's native sensor suite. In this study, we introduce a deep convolutional neural network-based locomotion mode classifier for hip exoskeleton applications, trained on an open-source gait biomechanics dataset collected with various wearable sensors. Our approach removes the limitations of previous systems: it is 1) subject-independent (i.e., requires no user-specific data), 2) classifies continuously, enabling smooth and seamless mode transitions, and 3) uses only a minimal set of wearable sensors native to a conventional hip exoskeleton. We optimized our model with respect to several factors that contribute to overall performance, including transition label timing, model architecture, and sensor placement, providing a holistic understanding of mode classifier design. Our optimized deep learning (DL) model achieved a 3.13% classification error (steady-state: 0.80 ± 0.38%; transitional: 6.49 ± 1.42%), outperforming machine learning-based benchmarks commonly used in the field (p < 0.05). Furthermore, our multi-modal analysis indicated that the model maintains high performance in novel settings, such as unseen stair or ramp inclines. Thus, our study presents a novel locomotion mode classification framework capable of advancing robotic exoskeleton applications toward assisting community ambulation.
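The continuous classification scheme described above can be sketched as a sliding-window pipeline: the exoskeleton's onboard sensor stream is segmented into short overlapping windows, and each window is passed to the trained classifier so that a mode decision is available many times per second rather than once per stride. The sketch below illustrates this structure only; the sample rate, window length, hop size, mode list, and the placeholder `classify_window` rule are all illustrative assumptions, not the paper's trained CNN.

```python
import numpy as np

# Illustrative sketch of continuous sliding-window mode classification.
# All constants below are assumptions for demonstration, not values
# from the study.
FS = 200        # assumed IMU sample rate (Hz)
WIN = 50        # window length: 250 ms of data
STRIDE = 10     # hop: 50 ms -> a 20 Hz classification update rate
MODES = ["level-ground", "ramp-ascent", "ramp-descent",
         "stair-ascent", "stair-descent"]

def sliding_windows(signal, win=WIN, stride=STRIDE):
    """Segment a (samples, channels) stream into overlapping windows
    of shape (n_windows, win, channels)."""
    n = (len(signal) - win) // stride + 1
    return np.stack([signal[i * stride : i * stride + win]
                     for i in range(n)])

def classify_window(window):
    """Stand-in for the trained CNN. A real system would run a
    forward pass here; this placeholder just maps the window's mean
    signal value onto a mode label for illustration."""
    idx = int(abs(window.mean()) * len(MODES)) % len(MODES)
    return MODES[idx]

# Example: 1 s of synthetic 6-channel hip-mounted IMU data.
stream = np.random.default_rng(0).standard_normal((FS, 6))
windows = sliding_windows(stream)           # (16, 50, 6)
labels = [classify_window(w) for w in windows]
```

Because consecutive windows overlap, the predicted mode sequence updates smoothly across a transition (e.g., level ground to stair ascent) instead of waiting for a full-gait-cycle boundary, which is what makes seamless assistance switching possible.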
