Both discrete motion states and continuous joint kinematics are essential for controlling assistive robots under changing environmental conditions. However, few studies investigate both discrete and continuous motion intent. This paper is the first to propose an end-to-end motion intent decoding method that integrates recognition of discrete locomotion modes with prediction of continuous joint kinematics and gait events. First, we propose a data-driven approach to segment the transitional periods between adjacent locomotion modes and determine their boundaries. Second, we build a CNN-LSTM network that uses locomotion mode classification as a prior for joint kinematics and gait event prediction, thereby decoding both discrete and continuous motion intents. Finally, we evaluate our method through extensive experiments and comparisons. The proposed method achieves locomotion mode recognition accuracies of 98.73% for steady-state and 97.53% for transitional periods. For knee angle prediction, the method achieves ahead-of-time prediction with NRMSE lower than 8.41%, R-value higher than 0.93, and R-square higher than 0.88 across various prediction times. For gait event prediction, it predicts the future timing of events with errors of 8.46 ms for initial contact, 32.76 ms for heel off, and 8.91 ms for toe off. This study demonstrates the feasibility of predicting locomotion-dependent kinematics and gait events on various terrains. The rich motion intents studied in this work highlight the potential for a more detailed lower-limb rehabilitation system.
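To illustrate the classification-as-prior idea described above, the following is a minimal PyTorch sketch of a CNN-LSTM network whose predicted mode probabilities condition the continuous heads. All layer sizes, the five-mode assumption, and the head names (`mode_head`, `angle_head`, `event_head`) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class CNNLSTMIntentDecoder(nn.Module):
    """Sketch: CNN feature extractor -> LSTM -> discrete mode head,
    whose soft output is fed as a prior into the continuous heads."""

    def __init__(self, n_channels=8, n_modes=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.mode_head = nn.Linear(hidden, n_modes)       # discrete locomotion mode
        # continuous heads see LSTM features plus mode probabilities (the prior)
        self.angle_head = nn.Linear(hidden + n_modes, 1)  # future knee angle
        self.event_head = nn.Linear(hidden + n_modes, 3)  # timing of IC / HO / TO

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)      # (batch, time, 32)
        _, (hn, _) = self.lstm(h)
        feat = hn[-1]                        # last LSTM hidden state: (batch, hidden)
        mode_logits = self.mode_head(feat)
        prior = torch.softmax(mode_logits, dim=-1)
        z = torch.cat([feat, prior], dim=-1)
        return mode_logits, self.angle_head(z), self.event_head(z)

model = CNNLSTMIntentDecoder()
modes, angle, events = model(torch.randn(2, 8, 100))  # 2 windows, 8 sensors, 100 samples
```

In this sketch the prior is injected by concatenating the softmax over mode logits with the shared features, so gradients from the kinematics and gait-event losses can also refine the classifier; the paper may couple the two stages differently.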