Abstract

Lower limb exoskeletons (LLEs) have received considerable interest for strength augmentation, rehabilitation, and walking assistance. For walking assistance, the LLE is expected to control the affected leg so that it naturally tracks the motion of the unaffected leg. An important issue in this scenario is that the exoskeleton system must cope with unpredictable disturbances from the patient, which requires the controller to adapt to different wearers. This paper proposes a novel Data-Driven Reinforcement Learning (DDRL) control strategy that adapts to different hemiplegic patients under unpredictable disturbances. In the proposed DDRL strategy, the interaction between the two lower limbs of the LLE and the legs of the hemiplegic patient is modeled in a leader-follower framework, and the walking assistance control problem is transformed into an optimal control problem. A policy iteration (PI) algorithm is then introduced to learn the optimal controller. To achieve online adaptive control for different patients, an Actor-Critic Neural Network (ACNN) scheme from reinforcement learning (RL) is employed in the proposed DDRL on the basis of the PI algorithm. We conduct experiments both in a simulation environment and on a real LLE system. The experimental results demonstrate that the proposed control strategy has strong robustness against disturbances and adapts well to different pilots.
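
The abstract names the core machinery (a policy iteration loop, later realized with actor-critic networks, for an optimal tracking controller) without giving its equations. As a rough illustration only, the sketch below shows the evaluate/improve structure of policy iteration on a hypothetical linear tracking-error model for the affected leg; the system matrices, cost weights, and initial gain are assumptions made for the example, and the sketch omits the paper's data-driven ACNN implementation.

```python
import numpy as np

# Illustrative policy-iteration sketch on an assumed linear tracking-error model.
# State e = [position error, velocity error] between the affected (follower) and
# unaffected (leader) leg joints; input u is an assistive torque correction.
# All matrices, weights, and gains below are assumptions, not values from the paper.
dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0]])      # assumed tracking-error dynamics
B = np.array([[0.0],
              [dt]])            # assumed input matrix
Q = np.diag([100.0, 1.0])       # penalize tracking error
R = np.array([[0.1]])           # penalize assistive torque

def evaluate_policy(K, tol=1e-9, max_iter=20000):
    """Policy evaluation: find the cost matrix P of the fixed feedback law
    u = -K e by fixed-point iteration of the Lyapunov equation."""
    Acl = A - B @ K
    Qk = Q + K.T @ R @ K
    P = np.zeros_like(Q)
    for _ in range(max_iter):
        P_next = Qk + Acl.T @ P @ Acl
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    return P

def improve_policy(P):
    """Policy improvement: greedy feedback gain for the evaluated cost."""
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Start from an assumed stabilizing PD-like gain and iterate until the gain settles.
K = np.array([[10.0, 5.0]])
for _ in range(20):
    P = evaluate_policy(K)
    K_new = improve_policy(P)
    if np.max(np.abs(K_new - K)) < 1e-6:
        K = K_new
        break
    K = K_new

print("converged feedback gain K =", K)
```

In the paper's data-driven setting, the evaluation and improvement steps would be carried out from measured human-exoskeleton interaction data through the critic and actor networks, rather than from a known model as in this simplified sketch.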

