Abstract

Riemannian Motion Policies (RMPs) have recently been introduced as an online motion planning and policy synthesis framework that designs second-order motion policies on robot task spaces and combines them into a single global policy that trades off between various motion objectives. Until now, RMPs have been applied only through direct joint-space acceleration control or through inverse dynamics control that assumes perfect knowledge of the system dynamics; it is unclear how RMPs can be implemented in the presence of dynamics modeling uncertainty and external disturbances. We address this by augmenting the existing RMP framework with a novel robust control Lyapunov function (RCLF) based inverse dynamics controller. The combination yields a fast, reactive, online motion planning and control framework that is also robust to uncertainty in the system's dynamic parameters and to external disturbances. We further propose a robust gain adaptation law that automatically compensates for parameter uncertainty and external disturbances, and we provide stability guarantees for the proposed RCLF controller under this adaptation law. Finally, we demonstrate the performance of the combined RMP-RCLF system on a 7-degree-of-freedom (DoF) robot manipulator arm in simulation.
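The abstract does not specify the controller's form. As an illustrative sketch only (not the paper's actual control law), a robust inverse dynamics controller driven by an RMP-generated acceleration could look as follows, where the hatted quantities, the error state $e$, and the matrices $B$, $P$ are generic placeholders assumed for illustration:

```latex
% Standard rigid-body manipulator dynamics with a lumped disturbance \tau_d:
%   M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau + \tau_d
% A generic robust inverse dynamics law built from model estimates
% \hat{M}, \hat{C}, \hat{g} (hats denote uncertain estimates):
\[
  \tau \;=\; \hat{M}(q)\,a_{\mathrm{rmp}}
        \;+\; \hat{C}(q,\dot{q})\,\dot{q}
        \;+\; \hat{g}(q)
        \;-\; k\,\frac{B^{\top} P e}{\lVert B^{\top} P e \rVert + \epsilon},
\]
% a_{rmp}: desired acceleration resolved from the combined RMP policy;
% e: tracking-error state of the nominal closed loop;
% P: solution of a Lyapunov equation for the nominal error dynamics;
% k: robust gain (a gain adaptation law would adjust k online to cover
%    parameter uncertainty and disturbance bounds);
% \epsilon > 0: smoothing constant that avoids chattering.
```

The final term is a smoothed sliding-mode-style robust correction; in an RCLF design, $k$ is chosen (or adapted) so that the Lyapunov function's derivative remains negative despite the modeling error and disturbance.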
