Abstract
For lower-limb amputees, active prostheses are becoming an increasingly viable option, improving their mobility and quality of life. To ensure effective prosthesis control, the terrain environment must be taken into account. For this purpose, we developed a locomotion mode recognition (LMR) framework combining a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network with image encoding to classify seven terrains. We employed three image encoding methods to transform inertial sensor data into activity images: signal images, the Gramian angular field, and the Mel spectrogram. The system uses a single inertial measurement unit (IMU) placed on the shank, keeping computational cost low and avoiding additional load on the user's body. Several locomotion mode recognition configurations, combining the network architectures with the image encodings, are investigated and compared. The results show that the proposed LMR-Net with Mel-spectrogram input performs best, classifying locomotion modes with an average F1 score of 0.9744 and an inference time of 31 ms, well below the 300 ms maximum latency allowed to avoid causing discomfort to the prosthesis user. These promising results open the way for using deep learning to control lower-limb prostheses with a minimal number of inertial measurement units, assisting users across different terrains.
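To make the image-encoding step concrete, the sketch below shows one of the three methods named in the abstract, the Gramian angular field (here its summation variant, GASF), applied to a single IMU channel. This is a minimal illustration, not the authors' implementation: the window length, the channel choice, and the use of raw NumPy are all assumptions for demonstration.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D signal window as a Gramian Angular Summation Field image.

    x: 1-D array of IMU samples (e.g., one accelerometer axis).
    Returns an (N, N) image where N = len(x).
    """
    # Rescale the window to [-1, 1] so arccos is defined.
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min + 1e-12) - 1.0
    x_scaled = np.clip(x_scaled, -1.0, 1.0)

    # Map each sample to an angle in polar coordinates.
    phi = np.arccos(x_scaled)

    # GASF entry (i, j) = cos(phi_i + phi_j), computed via an outer sum.
    return np.cos(phi[:, None] + phi[None, :])

# Hypothetical usage: a 128-sample window from one shank-IMU channel.
window = np.random.randn(128)          # stand-in for real accelerometer data
image = gramian_angular_field(window)  # (128, 128) "activity image"
```

An image of this kind, stacked across sensor channels, is the sort of input a CNN-LSTM classifier like the one described above would consume; the paper itself should be consulted for the exact windowing and network configuration.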