Abstract

This paper studies a deep reinforcement learning (DRL) based guidance and control framework for target tracking by multiple autonomous surface underwater vehicles (multi-ASUV). The framework enables the vehicles to complete standoff tracking and sampling tasks along a predetermined circular trajectory centered on the target while maintaining predetermined relative positions, so as to obtain high-precision spatio-temporally synchronized data. We design an end-to-end architecture that maps sensor inputs directly to control commands and develop autonomous agents capable of achieving the hybrid objective of cooperative guidance, standoff tracking, and dynamic obstacle avoidance without prior knowledge of the goal or the environment. The results demonstrate the feasibility of the end-to-end DRL method, which achieves higher accuracy than the traditional "guidance-control" two-step method. In addition, an obstacle avoidance and standoff tracking experiment for the swarm and a sampling experiment in a mesoscale eddy area are simulated to further verify the proposed framework's effectiveness and robustness.
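The end-to-end mapping from sensor inputs to control commands described above can be pictured as a small policy network trained with DRL. The following is a minimal illustrative sketch, not the authors' implementation: the observation layout (own state, relative target position, neighbor offsets, obstacle ranges), the two-dimensional command output (e.g., speed and heading-rate setpoints), and all dimensions are assumptions for illustration only.

```python
# Minimal sketch of an end-to-end actor that maps a vehicle's sensor observation
# vector directly to bounded continuous control commands. Observation and action
# dimensions are hypothetical; outputs in [-1, 1] would be rescaled by the caller.
import torch
import torch.nn as nn


class EndToEndActor(nn.Module):
    def __init__(self, obs_dim: int = 24, act_dim: int = 2, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded control commands
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim) sensor features -> (batch, act_dim) commands
        return self.net(obs)


if __name__ == "__main__":
    actor = EndToEndActor()
    obs = torch.randn(4, 24)      # a batch of 4 vehicles' observations
    commands = actor(obs)         # shape (4, 2): e.g., speed and heading-rate setpoints
    print(commands.shape)
```

In an actor-critic DRL setup, a network of this shape would be optimized against a reward combining tracking error on the standoff circle, formation-keeping terms, and obstacle-avoidance penalties, rather than against supervised labels.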
