Online reinforcement learning (RL) methods for autonomous underwater vehicles (AUVs) are time-consuming and unsafe because they require real-world interaction. Offline RL methods can improve efficiency and safety by training against dynamic models, but an accurate model of an AUV is difficult to obtain because of its highly nonlinear dynamics. These limitations restrict the application of RL methods to AUV control. To address this issue, we propose physics-informed model-based conservative offline policy optimization (PICOPO). By combining physics-informed dynamic modelling with offline RL, it offers the advantages of small dataset requirements, strong generalizability, and high safety. First, PICOPO constructs a physics-informed model from a small offline dataset to serve as a digital twin (DT) of the actual AUV. This DT can forecast the long-term motion states of the AUV with high precision. The RL-based controller is then trained offline within this DT, eliminating the need for real-world interaction and allowing direct deployment to the AUV without fine-tuning. In this paper, simulations and field tests are carried out to evaluate the proposed method. Our results demonstrate that PICOPO achieves accurate motion control with just 2000 samples and enables zero-shot sim-to-real transfer, showcasing strong generalizability across various motion control tasks.
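The sketch below illustrates the two-stage workflow the abstract describes: fit a physics-informed dynamics model (the digital twin) on a small offline dataset, then optimize a policy purely inside that model before zero-shot deployment. All names (`PhysicsInformedDynamics`, `train_policy_offline`), the linear residual model, and the hill-climbing policy search are illustrative assumptions; the paper's actual AUV equations of motion and conservative offline RL update are not given in the abstract.

```python
import numpy as np


class PhysicsInformedDynamics:
    """Surrogate AUV dynamics: an analytic physics prior plus a learned linear residual."""

    def __init__(self, n_state, n_action, seed=0):
        rng = np.random.default_rng(seed)
        # Residual weights mapping (state, action) to a state correction.
        self.W = rng.normal(scale=0.01, size=(n_state, n_state + n_action))

    def physics_prior(self, s, a):
        # Placeholder for the known rigid-body terms (inertia, drag, thrust);
        # the real model would use the AUV's equations of motion here.
        return s + 0.05 * np.tanh(a).sum() * np.ones_like(s)

    def fit_residual(self, states, actions, next_states, lr=1e-2, epochs=200):
        # Gradient fit of the residual between the offline data and the prior.
        X = np.hstack([states, actions])
        Y = next_states - np.array(
            [self.physics_prior(s, a) for s, a in zip(states, actions)]
        )
        for _ in range(epochs):
            grad = (X @ self.W.T - Y).T @ X / len(X)
            self.W -= lr * grad

    def step(self, s, a):
        # One-step prediction used for offline rollouts inside the digital twin.
        return self.physics_prior(s, a) + self.W @ np.concatenate([s, a])


def train_policy_offline(model, n_state, n_action, target, iters=300, horizon=50, seed=1):
    # Hill-climbing over a linear tracking policy, evaluated only in the learned
    # model (no real-world interaction). A structural stand-in for the paper's
    # conservative offline policy optimization, not the PICOPO update itself.
    rng = np.random.default_rng(seed)
    K, best = np.zeros((n_action, n_state)), -np.inf
    for _ in range(iters):
        cand = K + 0.1 * rng.normal(size=K.shape)
        s, ret = np.zeros(n_state), 0.0
        for _ in range(horizon):
            a = np.clip(cand @ (target - s), -1.0, 1.0)
            s = model.step(s, a)
            ret -= np.linalg.norm(target - s)  # negative tracking error as reward
        if ret > best:
            best, K = ret, cand
    return K


# Usage: a 2000-sample synthetic offline dataset stands in for logged AUV data.
n_state, n_action = 6, 3
rng = np.random.default_rng(2)
states = rng.normal(size=(2000, n_state))
actions = rng.uniform(-1, 1, size=(2000, n_action))
next_states = (
    states
    + 0.05 * np.tanh(actions).sum(axis=1, keepdims=True)
    + rng.normal(scale=0.01, size=states.shape)
)

twin = PhysicsInformedDynamics(n_state, n_action)
twin.fit_residual(states, actions, next_states)
K = train_policy_offline(twin, n_state, n_action, target=np.ones(n_state))
```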