Abstract
Online reinforcement learning (RL) methods for autonomous underwater vehicles (AUVs) are time-consuming and unsafe because they require real-world interaction. Offline RL methods can improve efficiency and safety by training with dynamics models, but an accurate model of an AUV is difficult to obtain due to its highly nonlinear dynamics. These limitations restrict the application of RL methods to AUV control. To address this issue, we propose physics-informed model-based conservative offline policy optimization (PICOPO). By combining physics-informed dynamics modelling with offline RL, it requires only a small dataset while offering strong generalizability and high safety. First, PICOPO constructs a physics-informed model from a small offline dataset to serve as a digital twin (DT) of the actual AUV. This DT can forecast the long-term motion states of the AUV with high precision. An RL-based controller is then trained offline within the DT, eliminating the need for real-world interaction and allowing direct deployment to the AUV without fine-tuning. In this paper, simulations and field tests are carried out to evaluate the proposed method. Our results demonstrate that PICOPO achieves accurate motion control with just 2000 samples and enables zero-shot sim-to-real transfer, showing strong generalizability across various motion control tasks.
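The abstract describes a two-stage pipeline: fit a physics-informed dynamics model (the digital twin) on a small offline dataset, then train a policy conservatively inside that model before zero-shot deployment. The sketch below illustrates that structure only; it is not the authors' implementation. The physics prior f_phys, the residual-network model, the quadratic task reward, the residual-magnitude conservatism penalty (a stand-in for model-uncertainty penalties in the style of MOPO), and all dimensions and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a PICOPO-style pipeline (all details are assumptions).
import torch
import torch.nn as nn

S_DIM, A_DIM, N = 4, 2, 2000        # ~2000 offline samples, as in the abstract

B = torch.zeros(S_DIM, A_DIM)
B[:A_DIM] = torch.eye(A_DIM)

def f_phys(s, a):
    # Hypothetical physics prior: crude linearized AUV kinematics (assumption).
    return s + 0.1 * a @ B.T

class ResidualDynamics(nn.Module):
    """Physics-informed model: physics prior plus a learned residual correction."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(S_DIM + A_DIM, hidden), nn.Tanh(),
            nn.Linear(hidden, S_DIM),
        )

    def residual(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

    def forward(self, s, a):
        return f_phys(s, a) + self.residual(s, a)

# --- Stage 1: fit the digital twin on the small offline dataset ---
s = torch.randn(N, S_DIM)                             # placeholder offline states
a = torch.randn(N, A_DIM)                             # placeholder offline actions
s_next = f_phys(s, a) + 0.01 * torch.randn(N, S_DIM)  # placeholder next states

model = ResidualDynamics()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    ((model(s, a) - s_next) ** 2).mean().backward()   # one-step prediction loss
    opt.step()

# --- Stage 2: train the controller entirely inside the learned twin ---
for p in model.parameters():
    p.requires_grad_(False)                           # the twin stays frozen

policy = nn.Sequential(nn.Linear(S_DIM, 64), nn.Tanh(),
                       nn.Linear(64, A_DIM), nn.Tanh())
popt = torch.optim.Adam(policy.parameters(), lr=1e-3)
lam, target = 1.0, torch.zeros(S_DIM)                 # conservatism weight (assumption)

for _ in range(200):
    st = s[torch.randint(N, (256,))]                  # start rollouts from data states
    ret = torch.zeros(256)
    for _ in range(5):                                # short model-based rollout
        at = policy(st)
        resid = model.residual(st, at)
        st = f_phys(st, at) + resid
        # Conservative return: task reward minus a penalty on the residual
        # magnitude, a stand-in for a model-uncertainty penalty.
        ret = ret - ((st - target) ** 2).sum(dim=-1) - lam * resid.norm(dim=-1)
    popt.zero_grad()
    (-ret.mean()).backward()
    popt.step()
```

The point of the sketch is the structure, not the specifics: the dataset is used once to fit the twin, the policy never touches the real vehicle during training, and the conservatism term discourages rollouts into regions where the learned correction (and hence model error) is large, which is what makes zero-shot transfer plausible.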