Abstract

Riemannian Motion Policies (RMPs) have recently been introduced as an online motion planning and policy synthesis framework that designs second-order motion policies on robot task spaces and combines them into one global policy that trades off among various motion objectives. Until now, RMPs have been applied only through direct joint-space acceleration control or through inverse dynamics control assuming perfect knowledge of the system dynamics; it is unclear how RMPs can be implemented in the presence of dynamics modeling uncertainty and external disturbances. We address this by augmenting the existing RMP framework with a novel inverse dynamics controller based on a robust control Lyapunov function (RCLF). This combination produces a fast, reactive, online motion planning and control framework that is also robust to parameter uncertainty in the system dynamics and to external disturbances. We further propose a robust gain adaptation law that automatically compensates for parameter uncertainty and external disturbances, and we provide stability guarantees for the proposed RCLF controller with this adaptation law. Additionally, the performance of the combined RMP-RCLF system is demonstrated in simulation on a 7-degree-of-freedom (DoF) robot manipulator arm.
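The RMP combination the abstract refers to is a metric-weighted pullback of per-task accelerations into the configuration space. The following minimal NumPy sketch illustrates that resolve step under simplifying assumptions: the two toy task maps, their constant Jacobians, and the metric gains are all illustrative choices, and the curvature (Jacobian-derivative) terms are dropped because the Jacobians here are constant. It is a sketch of the standard RMP resolve, not the paper's implementation.

```python
import numpy as np

def resolve_rmps(jacobians, accels, metrics):
    """Combine task-space policies (a_i, M_i) with task maps J_i into a
    single joint-space acceleration via the metric-weighted pullback:
        qdd = (sum_i J_i^T M_i J_i)^+  sum_i J_i^T M_i a_i
    Curvature terms are omitted since the Jacobians are constant here."""
    dof = jacobians[0].shape[1]
    M = np.zeros((dof, dof))   # pulled-back metric
    f = np.zeros(dof)          # pulled-back force
    for J, a, Mi in zip(jacobians, accels, metrics):
        M += J.T @ Mi @ J
        f += J.T @ Mi @ a
    return np.linalg.pinv(M) @ f

# Toy 2-DoF example with two constant-Jacobian task spaces (illustrative).
J1 = np.eye(2)                  # joint-space regularization task
J2 = np.array([[1.0, 1.0]])     # 1-D task x = q1 + q2
a1 = np.array([0.0, 0.0])       # prefer zero joint acceleration
a2 = np.array([1.0])            # drive the 1-D task coordinate forward
M1 = 0.1 * np.eye(2)            # low-weight metric
M2 = np.array([[1.0]])          # high-weight metric
qdd = resolve_rmps([J1, J2], [a1, a2], [M1, M2])
```

Because the second task carries the larger metric, the resolved joint acceleration splits the 1-D task command symmetrically across both joints while the low-weight regularizer only slightly shrinks it, which is the trade-off behavior the framework relies on.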
