Abstract

Rehabilitation devices such as actuated exoskeletons can provide mobility assistance for patients suffering from paralysis or muscle weakness. To improve the well-being of patients, the control design of exoskeletons is of paramount importance. In this paper, we present a reinforcement learning (RL)-based sliding mode control method for an upper-limb exoskeleton, enabling it to learn to follow a desired trajectory in Cartesian space. The deep deterministic policy gradient (DDPG) algorithm, which uses an actor-critic architecture, is employed to continuously adjust the non-singular terminal sliding mode control (NSTSMC) inputs based on previous experiences. The actor network learns the policy, while the critic network evaluates the quality of the actions chosen by the actor. The robustness of the proposed approach is studied when the system is subjected to random disturbances. The simulation results demonstrate that the proposed RL-based approach effectively accomplishes the exoskeleton's trajectory tracking tasks. Moreover, a comparative analysis with the standard NSTSMC, computed torque (CT), and RL-based CT controllers shows the superiority of the proposed approach in terms of position tracking error. These findings are further confirmed by various performance evaluation metrics.
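To make the DDPG-plus-NSTSMC combination described above concrete, the following is a minimal sketch, not the authors' implementation: a small actor-critic pair that adds a learned torque correction to a generic non-singular terminal sliding mode term. The state layout (joint errors and error rates of an assumed 2-DOF arm), the surface gains `beta`, `p`, `q`, the network sizes, and the reaching gain `k` are illustrative assumptions, and the replay buffer and target networks usually present in DDPG are omitted for brevity.

```python
# Hedged sketch of an RL-adjusted NSTSMC control law (assumptions, not the paper's code).
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM = 4, 2   # assumed: [e, e_dot] for a 2-DOF upper-limb model

def nstsm_term(e, e_dot, beta=1.0, p=5, q=3):
    """Generic non-singular terminal sliding surface s = e + (1/beta)*|e_dot|^(p/q)*sign(e_dot)."""
    return e + (1.0 / beta) * np.sign(e_dot) * np.abs(e_dot) ** (p / q)

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)          # bounded torque correction in [-1, 1]

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def control_input(e, e_dot, k=2.0):
    """Total torque: baseline NSTSM reaching term plus the RL correction from the actor."""
    s = nstsm_term(e, e_dot)
    u_smc = -k * np.sign(s)                           # sliding-mode reaching action
    state = torch.as_tensor(np.concatenate([e, e_dot]), dtype=torch.float32)
    with torch.no_grad():
        u_rl = actor(state).numpy()
    return u_smc + u_rl

def ddpg_update(batch, gamma=0.99):
    """One DDPG step on a (s, a, r, s_next) batch of torch tensors (target nets omitted)."""
    s, a, r, s_next = batch
    with torch.no_grad():
        target_q = r + gamma * critic(s_next, actor(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()          # ascend Q under the current policy
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In this arrangement the sliding mode term guarantees a robust baseline response, while the actor's bounded correction is tuned online from the reward signal, which is consistent with the abstract's description of the DDPG agent continuously adjusting the NSTSMC inputs from experience.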
