Abstract

The cooperative execution of complex tasks can lead to desirable outcomes and increase the likelihood of mission success. Nevertheless, coordinating the movements of multiple autonomous underwater vehicles (AUVs) in a collaborative manner is challenging due to nonlinear dynamics and environmental disturbances. This paper presents a decentralized deep reinforcement learning algorithm that enables cooperative motion planning and obstacle avoidance for AUVs. The goal is to formulate control policies that empower each vehicle to generate its own optimal collision-free path by adjusting its speed and heading. Because COLlision AVoidance (COLAV) is crucial to the safe navigation of multiple AUVs, a multi-layer region control strategy is implemented to enhance the AUVs' responsiveness to nearby obstacles, thereby improving COLAV. Furthermore, a reward function is formulated around four criteria: path planning, obstacle COLAV, self-COLAV, and feasible control signals, with the aim of strengthening the proposed strategy. Notably, the devised scheme demonstrates robustness against disturbances. A comparative study is conducted with the well-established Artificial Potential Field (APF) planning method. The simulation results indicate that the proposed system effectively and safely guides the AUVs to their goals and exhibits desirable generalizability.
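To make the four-criterion reward and the multi-layer region idea concrete, the following is a minimal Python sketch of how such a composite reward for a single AUV might be assembled. It is illustrative only: the abstract does not specify the actual formulation, so all function names, weights, layer radii, and penalty values below are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): a composite reward for one AUV,
# combining the four criteria named in the abstract. All weights, thresholds,
# and layer radii are hypothetical placeholders.
import numpy as np

def layered_penalty(dist, layers=((2.0, -1.0), (5.0, -0.3), (10.0, -0.1))):
    """Multi-layer region penalty: inner layers impose steeper penalties.

    `layers` lists (radius, penalty) pairs sorted from innermost outward;
    an object farther than the outermost radius contributes no penalty.
    """
    for radius, penalty in layers:
        if dist < radius:
            return penalty
    return 0.0

def reward(pos, goal, prev_pos, obstacle_dists, peer_dists, control,
           w=(1.0, 1.0, 1.0, 0.05)):
    w_path, w_obs, w_peer, w_ctrl = w
    # 1) Path planning: reward progress toward the goal position.
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    r_path = w_path * progress
    # 2) Obstacle COLAV: layered penalty for each detected obstacle.
    r_obs = w_obs * sum(layered_penalty(d) for d in obstacle_dists)
    # 3) Self-COLAV: the same layered scheme applied to the other AUVs.
    r_peer = w_peer * sum(layered_penalty(d) for d in peer_dists)
    # 4) Feasible control: penalize large speed/heading commands.
    r_ctrl = -w_ctrl * float(np.sum(np.square(control)))
    return r_path + r_obs + r_peer + r_ctrl
```

Under this sketch, each AUV evaluates the reward from its own local observations (goal, detected obstacles, peer positions), which is consistent with the decentralized policy formulation described above.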
