Abstract

This paper introduces a learning-based solution for the integrated motion planning and control of multiple Autonomous Underwater Vehicles (AUVs). Cooperative motion planning, encompassing tasks such as waypoint tracking and inter-vehicle/obstacle collision avoidance, is difficult to address within a rule-based algorithmic paradigm: the diverse and unpredictable situations encountered demand a proliferation of if-then conditions in the implementation. Recognizing the limitations of traditional approaches, which depend heavily on models and the geometry of the system, our solution offers an alternative paradigm. The proposed strategy integrates motion planning and control by mapping sensor and navigation outputs directly to longitudinal and lateral control actions. At its core lies a continuous-action Deep Reinforcement Learning (DRL) framework based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which, together with a carefully designed reward function, generates the control actions needed to maneuver multiple AUVs. Simulation tests under both nominal and perturbed conditions, including obstacles and underwater current disturbances, demonstrate the feasibility and robustness of the proposed technique.
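For readers unfamiliar with TD3, the following minimal PyTorch sketch illustrates its two defining mechanisms, clipped double-Q targets and delayed policy/target updates, applied to a continuous two-dimensional (longitudinal, lateral) control output. The state and action dimensions, network sizes, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal TD3-style update sketch for a continuous-action controller.
# Hypothetical throughout: dimensions, network sizes, and hyperparameters
# are illustrative assumptions, not the paper's implementation.
import copy
import torch
import torch.nn as nn

STATE_DIM = 12   # assumed: sensor + navigation outputs (e.g. ranges, pose, waypoint error)
ACTION_DIM = 2   # assumed: longitudinal (surge) and lateral (yaw) commands in [-1, 1]

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = mlp(STATE_DIM, ACTION_DIM)

    def forward(self, s):
        return torch.tanh(self.net(s))  # bounded continuous control outputs

actor = Actor()
critic1 = mlp(STATE_DIM + ACTION_DIM, 1)   # twin critics mitigate Q overestimation
critic2 = mlp(STATE_DIM + ACTION_DIM, 1)
actor_tgt, c1_tgt, c2_tgt = map(copy.deepcopy, (actor, critic1, critic2))

actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(
    list(critic1.parameters()) + list(critic2.parameters()), lr=3e-4)

GAMMA, TAU, POLICY_NOISE, NOISE_CLIP, POLICY_DELAY = 0.99, 0.005, 0.2, 0.5, 2

def td3_update(step, s, a, r, s_next, done):
    """One gradient step on a batch (s, a, r, s_next, done) from a replay buffer."""
    with torch.no_grad():
        # Target policy smoothing: clipped noise on the target action
        noise = (torch.randn_like(a) * POLICY_NOISE).clamp(-NOISE_CLIP, NOISE_CLIP)
        a_next = (actor_tgt(s_next) + noise).clamp(-1.0, 1.0)
        sa_next = torch.cat([s_next, a_next], dim=1)
        # Clipped double-Q target: minimum of the twin target critics
        q_tgt = torch.min(c1_tgt(sa_next), c2_tgt(sa_next))
        y = r + GAMMA * (1.0 - done) * q_tgt

    sa = torch.cat([s, a], dim=1)
    critic_loss = (nn.functional.mse_loss(critic1(sa), y)
                   + nn.functional.mse_loss(critic2(sa), y))
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Delayed policy and soft target updates (the "delayed" part of TD3)
    if step % POLICY_DELAY == 0:
        actor_loss = -critic1(torch.cat([s, actor(s)], dim=1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        for tgt, src in ((actor_tgt, actor), (c1_tgt, critic1), (c2_tgt, critic2)):
            for p_t, p in zip(tgt.parameters(), src.parameters()):
                p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

In a multi-AUV setting such as the one the paper describes, each vehicle's reward would additionally penalize proximity to other vehicles and obstacles while rewarding waypoint progress; the exact reward shaping is specific to the paper and not reproduced here.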
