Abstract
Turbulence induces unsteady loads on autonomous underwater vehicles (AUVs) and may present a significant navigation challenge, leading to elevated risks of mission failure or vehicle damage in proximity to obstacles. A scenario of particular interest is the inspection of offshore structures, which must be carried out at short range inside a turbulent wake. This work presents a control strategy based on reinforcement learning (RL) designed to handle such a complex manoeuvring scenario. Training and evaluation are carried out using computational fluid dynamics (CFD) simulations of a simplified 2D geometry with manoeuvring characteristics similar to those of an AUV moving in the horizontal plane. Because of the high cost of the simulations, substantial emphasis has been placed on improving the sampling efficiency of RL training through experience transfer from a computationally less demanding environment and quicker filling of the replay buffer by applying geometric transformations to the observations. The trained agent can navigate not only in the training environment but also in a previously unseen flow generated by a large circular cylinder immersed in a current. The developed control strategy has also been interfaced with a path-following algorithm, allowing the controlled vehicle to carry out an inspection task.
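The replay-buffer augmentation mentioned above can be illustrated with a minimal sketch. For a vehicle moving in the horizontal plane, reflecting a transition about the vehicle's longitudinal plane of symmetry yields a second, equally valid training sample. The observation and action layouts below are purely illustrative assumptions, not the paper's actual state definition:

```python
import numpy as np

# Hypothetical observation layout for a planar vehicle (assumed, not from
# the paper): [surge velocity u, sway velocity v, yaw rate r,
#             cross-track error y_e, heading error psi_e].
# Under reflection about the longitudinal plane, lateral quantities flip
# sign while longitudinal ones are unchanged.
MIRROR_SIGNS_OBS = np.array([1.0, -1.0, -1.0, -1.0, -1.0])
# Action assumed to be [thrust, yaw moment]; only the yaw moment flips.
MIRROR_SIGNS_ACT = np.array([1.0, -1.0])

def mirror_transition(obs, act, reward, next_obs):
    """Return the mirrored copy of a transition tuple.

    If the dynamics and reward are symmetric about the longitudinal
    plane, the mirrored tuple is a valid sample, so each expensive CFD
    step contributes two replay-buffer entries instead of one.
    """
    return (obs * MIRROR_SIGNS_OBS,
            act * MIRROR_SIGNS_ACT,
            reward,
            next_obs * MIRROR_SIGNS_OBS)
```

Each simulated transition would then be stored twice: once as observed, and once mirrored, halving the number of CFD steps needed to populate the buffer under the stated symmetry assumption.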