Abstract
Imitation learning (IL) facilitates intuitive robot programming. However, ensuring the reliability of learned behaviors remains a challenge. In the context of reaching motions, a robot should consistently reach its goal regardless of its initial conditions. To meet this requirement, IL methods often employ specialized function approximators that guarantee this property by construction. Although effective, these approaches come with two limitations: 1) they are typically restricted in the range of motions they can model, resulting in suboptimal IL capabilities, and 2) they require explicit extensions to account for the geometry of motions involving orientations. To address these challenges, we introduce a novel stability loss function that does not constrain the function approximator's architecture and enables learning policies that yield accurate results. Furthermore, it is not restricted to a specific state space geometry; hence, it can easily incorporate the geometry of the robot's state space. We provide a proof of the stability properties induced by this loss and empirically validate the method in various settings, including Euclidean and non-Euclidean state spaces, as well as first-order and second-order motions, both in simulation and with real robots. A video of the experimental results is available at https://youtu.be/ZWKLGntCI6w.
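The paper's specific loss is not reproduced here; as a hedged, illustrative sketch of the general idea described in the abstract (an architecture-agnostic penalty that encourages trajectories to contract toward the goal, imposed on rollouts rather than on the network's structure), one could penalize any rollout step in which the distance to the goal fails to decrease. The function names, the Euler rollout, and the hinge-style penalty below are all illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def stability_loss(policy, x0, goal, horizon=20, dt=0.05, margin=0.0):
    """Illustrative stability penalty (not the paper's exact loss).

    Rolls out first-order dynamics x' = policy(x) with Euler steps and
    accumulates a hinge penalty whenever the Euclidean distance to the
    goal fails to shrink by at least `margin`.
    """
    x = np.asarray(x0, dtype=float)
    goal = np.asarray(goal, dtype=float)
    loss = 0.0
    d_prev = np.linalg.norm(x - goal)
    for _ in range(horizon):
        x = x + dt * policy(x)                      # Euler integration step
        d = np.linalg.norm(x - goal)
        loss += max(0.0, d - d_prev + margin)       # penalize non-contracting steps
        d_prev = d
    return loss

# A linear policy that contracts toward the goal incurs zero penalty,
# while one that flows away from the goal is penalized.
goal = np.array([1.0, -0.5])
contracting = lambda x: -(x - goal)
diverging = lambda x: (x - goal)
print(stability_loss(contracting, [3.0, 2.0], goal))  # -> 0.0
print(stability_loss(diverging, [3.0, 2.0], goal) > 0.0)  # -> True
```

Because the penalty is computed purely from rollouts of the policy, it places no constraint on the policy's architecture, which is the key property the abstract highlights; the distance function could likewise be replaced by a geodesic distance for non-Euclidean state spaces.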