Abstract

The obstacle avoidance problem for autonomous surface vessels (ASVs) has long attracted the attention of the marine control research community. For safety, an ASV must avoid obstacles of all kinds, such as shores, cliffs, floating objects, and other vessels. Developing a heading and path planning strategy for the ASV is the main task and the remaining challenge. Traditional obstacle avoidance algorithms require too much computation in the working environment. This computational cost can be reduced by training obstacle avoidance models with reinforcement learning (RL): an RL-based ASV chooses the most efficient action according to the experience it has accumulated. In this paper, RL is adopted to design a decision-making agent for obstacle avoidance. To train the obstacle avoidance model in a sparse-feedback environment, a hierarchical reinforcement learning (HRL) method is applied, yielding better obstacle avoidance performance and longer survival time. Memory pool and target network modifications are also used to smooth the training process of the ASV. Simulation results demonstrate that HRL makes the learning process of the unmanned ship's obstacle avoidance smoother and more effective, and that the survival time of the ASV is improved.
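The abstract does not give the exact agent architecture, but the "memory pool" and "target network" modifications it mentions correspond to the experience replay buffer and the periodically synchronized target network used in DQN-style training. The sketch below illustrates that general mechanism under assumed state/action dimensions (STATE_DIM, N_ACTIONS), network sizes, and hyperparameters; it is an illustrative stand-in, not the paper's reported HRL model.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical state/action sizes for an ASV heading-control task.
STATE_DIM, N_ACTIONS = 8, 5


class QNet(nn.Module):
    """Small fully connected Q-network mapping ASV states to action values."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


policy_net = QNet()
target_net = QNet()
target_net.load_state_dict(policy_net.state_dict())  # target starts as a copy

memory = deque(maxlen=10_000)   # "memory pool": replay buffer of past transitions
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
GAMMA, BATCH, SYNC_EVERY = 0.99, 64, 200  # assumed hyperparameters


def train_step(step):
    """One DQN-style update drawn from the replay memory."""
    if len(memory) < BATCH:
        return
    batch = random.sample(memory, BATCH)  # break correlation between samples
    s, a, r, s2, done = (torch.as_tensor(x) for x in zip(*batch))
    s, s2, r, done = s.float(), s2.float(), r.float(), done.float()

    # Q-values of the actions actually taken.
    q = policy_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target from the slowly updated target network.
        q_next = target_net(s2).max(1).values
        target = r + GAMMA * (1.0 - done) * q_next

    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Periodic synchronization keeps the bootstrap target stable.
    if step % SYNC_EVERY == 0:
        target_net.load_state_dict(policy_net.state_dict())
```

In this setup, transitions `(state, action, reward, next_state, done)` produced by the ASV simulation are appended to `memory`, and `train_step` is called once per environment step; the replay buffer and the delayed target network are what smooth the otherwise noisy updates the abstract refers to.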
