Abstract

One promising function of interactive robots is to provide a specific interaction force to human users. For example, rehabilitation robots are expected to promote patients' recovery by interacting with them using a prescribed force. However, motion uncertainties across individuals, which are hard to predict due to varying motion speeds and noise during motion, degrade the performance of existing control methods. This paper proposes a method to learn a desired reference trajectory for a robot based on dynamic motion primitives (DMPs) and iterative learning (IL). By controlling the robot to follow the generated desired reference trajectory, the interaction force can achieve a desired value. In our proposed approach, DMPs are first employed to parameterize the demonstration trajectories of the human user. Then a recursive least squares (RLS)-based estimator is developed and combined with the Adam optimization method to update the trajectory parameters, so that the desired reference trajectory of the robot is iteratively obtained by solving the DMPs with the updated parameters. Since the proposed method parameterizes the trajectories with respect to the phase variable, it removes the essential assumption of traditional IL methods that the iteration period is invariant, and thus has improved robustness compared with existing methods. Experiments are performed using an interactive robot to validate the effectiveness of our proposed scheme.
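
To make the abstract's pipeline concrete, below is a minimal illustrative sketch (not the authors' implementation) of the two ingredients it names: a DMP forcing term parameterized over a phase variable by weighted Gaussian basis functions, and generic RLS and Adam update steps that could adjust those weights between iterations. All names and parameter values (n_basis, alpha_s, learning rate, forgetting factor) are assumptions chosen for illustration only.

```python
import numpy as np

class PhaseIndexedDMP:
    """Phase-indexed DMP forcing term f(s) = (sum_i w_i * psi_i(s)) * s,
    where s is the canonical phase variable decaying from 1 to 0."""

    def __init__(self, n_basis=20, alpha_s=4.0):
        self.alpha_s = alpha_s                              # canonical-system decay rate (assumed)
        self.centers = np.exp(-alpha_s * np.linspace(0, 1, n_basis))
        diffs = np.diff(self.centers)
        self.widths = 1.0 / np.hstack([diffs, diffs[-1]]) ** 2
        self.w = np.zeros(n_basis)                          # trajectory parameters to be learned

    def basis(self, s):
        """Normalized Gaussian basis activations at phase s."""
        psi = np.exp(-self.widths * (s - self.centers) ** 2)
        return psi / (psi.sum() + 1e-10)

    def forcing(self, s):
        """Phase-dependent forcing term that shapes the reference trajectory."""
        return self.basis(s) @ self.w * s


def rls_update(theta, P, phi, y, lam=0.99):
    """Standard recursive least squares step: refine the estimate theta of a
    linear model y ~ phi @ theta from one new measurement (illustrative only)."""
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P


def adam_update(w, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step on the DMP weights; in the paper's setting the gradient
    would be derived from the force-tracking error between iterations (assumed)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Because the weights are indexed by the phase variable s rather than by time, the same parameterization applies even when successive iterations have different durations, which is the property the abstract credits for removing the fixed-iteration-period assumption of traditional IL.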
