Abstract

Enhancing the efficiency of collision avoidance for unmanned surface vehicles (USVs) can have a significant impact, enabling safer navigation and lower energy consumption. This paper introduces a robust approach that employs deep reinforcement learning to make informed collision avoidance decisions in complex maritime environments. The restrictions on USV maneuverability and the International Regulations for Preventing Collisions at Sea (COLREGs) are studied and quantified, with particular focus on how USV speed changes the shape and size of the ship domain. Based on the deep Q-network, an improved method is designed that incorporates a noisy network, prioritized experience replay, a dueling neural network architecture, and double Q-learning, yielding a highly efficient sampling, exploration, and learning process. To curtail the computational cost on USVs, a novel dynamic area restriction technique is proposed. Furthermore, an innovative USV state clipping method is introduced to reduce training complexity. Using the Unity platform, a complex and stochastic virtual environment is constructed for training and testing USV collision avoidance. The proposed approach surpasses the pre-improvement algorithm across multiple collision avoidance effectiveness indicators and performance metrics.
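Two of the DQN extensions named above, the dueling architecture and double Q-learning, can be summarized in a few lines. The following is a minimal NumPy sketch (not the paper's implementation; the function names and scalar/array shapes are illustrative assumptions): the dueling head combines a state value with mean-centered action advantages, and the double-Q target uses the online network to select the next action while the target network evaluates it.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).

    `value` is a scalar state value V(s); `advantages` is a 1-D array
    of per-action advantages A(s,·). Subtracting the mean advantage
    keeps V and A identifiable.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def double_q_target(reward, gamma, q_online_next, q_target_next, done):
    """Double Q-learning target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    The online network picks the greedy next action; the target network
    scores it, which reduces the overestimation bias of vanilla DQN.
    """
    if done:
        return float(reward)
    best_action = int(np.argmax(q_online_next))
    return float(reward + gamma * q_target_next[best_action])

# Example usage with toy numbers:
q_values = dueling_q(1.0, [1.0, 2.0, 3.0])           # -> [0.0, 1.0, 2.0]
y = double_q_target(1.0, 0.9,
                    q_online_next=np.array([0.5, 2.0]),
                    q_target_next=np.array([1.0, 4.0]),
                    done=False)                       # -> 1.0 + 0.9 * 4.0 = 4.6
```

The noisy-network and prioritized-replay components replace epsilon-greedy exploration and uniform minibatch sampling, respectively, but are omitted here for brevity.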
