Abstract

Estimating the joint torques of the lower limbs in human gait, known as motion intent understanding, is of great significance in the control of lower limb exoskeletons. This study presents novel soft smart shoes designed for motion intent learning at unspecified walking speeds using long short-term memory with a convolutional autoencoder. The smart shoes serve as a wearable sensing system consisting of a soft instrumented sole and two 3D motion sensors that are nonintrusive to the human gait and comfortable for the wearers. A novel data structure, the "sensor image", is developed for the measured ground reaction force and foot motion. A convolutional autoencoder is established to fuse the multisensor datasets and extract the hidden features of the sensor images, which represent the spatial and temporal correlations among the data. Long short-term memory is then exploited to learn the multiscale, highly nonlinear input-output relationships between the acquired features and the joint torques. Experiments were conducted on five subjects at three walking speeds (0.8 m/s, 1.2 m/s, and 1.6 m/s). Results showed that 98% of the r² values were acceptable in individual testing and 75% of the r² values were acceptable in interindividual testing. The proposed method is able to learn the joint torques in human gait and has satisfactory generalization properties.
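The abstract describes arranging the multisensor measurements into a 2-D "sensor image" before feeding them to the convolutional autoencoder. As a rough illustration only, the following sketch shows one plausible way such images could be built from windowed multichannel data; the channel count, window length, and stride are assumptions, not values from the paper:

```python
import numpy as np

def make_sensor_images(signals, window=100, stride=10):
    """Stack per-sample sensor channels into 2-D 'images'.

    signals: (T, C) array of C channels (e.g. sole pressure cells
             plus 3D motion-sensor axes) sampled over T time steps.
    Returns an (N, C, window) array of overlapping windows, each a
    channels-by-time 'image' suitable for a convolutional encoder.
    """
    T, C = signals.shape
    starts = range(0, T - window + 1, stride)
    return np.stack([signals[s:s + window].T for s in starts])

# toy example: 1000 samples of 16 channels
x = np.random.default_rng(0).standard_normal((1000, 16))
images = make_sensor_images(x, window=100, stride=10)
print(images.shape)  # (91, 16, 100)
```

Each window then plays the role of one sensor image: the autoencoder compresses it into a feature vector, and the sequence of feature vectors is what the LSTM maps to joint torques.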
