Abstract

Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. Our model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at prediction time intervals ranging from 100 to 500 ms. Experimental results demonstrated Deep-STF's state-of-the-art prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF achieved an average prediction accuracy of 96.60%, outperforming seven benchmark models. Even with an extended 500 ms prediction horizon, the accuracy decreased only marginally, to 93.22%. The average stable prediction times for detecting upcoming transitions ranged from 31.47 to 371.58 ms across the 100-500 ms prediction horizons. Although the prediction accuracy of the trained Deep-STF initially dropped to 71.12% when tested on four new terrains, it reached a satisfactory accuracy of 92.51% after fine-tuning with just five calibration trials and further improved to 96.27% with 15. These results demonstrate the strong prediction ability and adaptability of Deep-STF, showing great potential for integration with walking-assistive devices and leading to smoother, more intuitive user interactions.
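The abstract does not detail the Deep-STF architecture, so the following is only a minimal sketch of the general idea it describes: extracting spatial, temporal, and frequency features from a multi-channel sEMG window and classifying the 24 targets (nine modes plus 15 transitions). The channel count, window length, layer sizes, branch designs, and fusion scheme below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' model): fuse spatial, temporal, and
# frequency features from an sEMG window and classify 9 modes + 15 transitions.
# All shapes and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn


class STFSketch(nn.Module):
    def __init__(self, n_channels: int = 8, n_samples: int = 200, n_classes: int = 24):
        super().__init__()
        # Spatial branch: convolve across electrode channels at each time step
        # (time samples are treated as the convolution's input channels).
        self.spatial = nn.Sequential(
            nn.Conv1d(n_samples, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Temporal branch: convolve along the time axis for each electrode.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Frequency branch: magnitude spectrum (rFFT) per channel, then convolution.
        self.frequency = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32 * 3, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples) raw sEMG window
        s = self.spatial(x.transpose(1, 2)).squeeze(-1)                   # (batch, 32)
        t = self.temporal(x).squeeze(-1)                                  # (batch, 32)
        f = self.frequency(torch.fft.rfft(x, dim=-1).abs()).squeeze(-1)   # (batch, 32)
        return self.classifier(torch.cat([s, t, f], dim=-1))              # (batch, n_classes)


if __name__ == "__main__":
    model = STFSketch()
    window = torch.randn(4, 8, 200)   # 4 example windows, 8 channels, 200 samples each
    print(model(window).shape)        # torch.Size([4, 24])
```

In this sketch the three branches are kept separate and concatenated before a single linear classifier; an end-to-end model such as Deep-STF may instead learn the feature dimensions jointly, and fine-tuning on new terrains would correspond to continuing training on a handful of calibration trials.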
