Abstract

As one of the most commonly used vehicles for underwater detection, underwater robots face a series of challenges. Real underwater environments are large-scale, complex, dynamic, and changing in real time, and may contain many unknown obstacles. Under such complex conditions and without prior knowledge, existing path planning methods struggle to plan effective paths and therefore cannot meet practical demands. In response to these problems, this paper establishes a three-dimensional marine environment containing multiple obstacles, built from real ocean current data so that it is consistent with actual application scenarios. We then propose an N-step Priority Double DQN (NPDDQN) path planning algorithm, which effectively realizes obstacle avoidance in this complex environment. In addition, this study proposes an experience screening mechanism that screens the positive experience gathered during exploration and improves its reuse rate, thereby improving the algorithm's stability in dynamic environments. This paper verifies that reinforcement learning outperforms a variety of traditional methods in three-dimensional underwater path planning. Underwater robots based on the proposed method show good autonomy and stability, providing a new approach to path planning for underwater robots.

<i>Note to Practitioners</i>&#x2014;The goal of this study is to provide a new solution for obstacle avoidance in the path planning of underwater robots, one consistent with the dynamic, real-time demands of real environments. Existing underwater path planning studies lack an environment consistent with actual applications, so we first construct a three-dimensional ocean environment from real ocean current data to support the algorithms. Additionally, most existing algorithms are pre-planning methods or require long computation times, and there is little research on obstacle avoidance.
When obstacles change, underwater robots with poor adaptability suffer performance degradation and even economic losses. The proposed algorithm learns through interaction with the environment, so it requires no prior experience and offers good adaptability as well as fast inference. In dynamic environments in particular, algorithm performance is difficult to guarantee because exploration yields little positive experience. The proposed experience screening mechanism improves the stability of the algorithm, so that the underwater robot maintains stable performance across different dynamic environments.
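The abstract does not give implementation details, but the two core ideas it names can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the authors' code: the n-step Double DQN target sums n discounted rewards and adds a bootstrap term in which the online network selects the greedy action and the target network evaluates it, and the hypothetical `screen_experience` filter (both its name and its threshold criterion are assumptions here) keeps high-return transitions so sparse positive experience can be replayed more often.

```python
# Minimal sketch of the n-step Double DQN target and a positive-experience
# screen, as named in the abstract. Function names, signatures, and the
# screening criterion are illustrative assumptions, not the paper's code.

def n_step_double_dqn_target(rewards, gamma, next_state, done,
                             q_online, q_target):
    """n-step return: discounted sum of the n collected rewards, plus a
    bootstrap term where the online net picks the greedy action and the
    target net evaluates it (the Double DQN decoupling)."""
    n = len(rewards)
    g = sum((gamma ** i) * r for i, r in enumerate(rewards))
    if not done:
        online_q = q_online(next_state)          # action selection
        a_star = max(range(len(online_q)), key=online_q.__getitem__)
        g += (gamma ** n) * q_target(next_state)[a_star]  # evaluation
    return g


def screen_experience(n_step_return, threshold=0.0):
    """Hypothetical screen: keep a transition for high-priority replay
    only if its n-step return exceeds a threshold, so the scarce positive
    experience found in a dynamic environment is reused more often."""
    return n_step_return > threshold
```

For example, with rewards [1.0, 1.0, 1.0], a discount of 0.9, and a terminal next state, the target is 1 + 0.9 + 0.81 = 2.71, and a transition with that return passes the screen at the default threshold.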
