Abstract

In the field of ocean energy detection, Autonomous Underwater Vehicles (AUVs) offer significant savings in manpower, resources, and energy. However, the unpredictable nature of the ocean environment, particularly the real-time changes in ocean currents, poses navigational risks for AUVs. Therefore, effective path planning in dynamic environments is crucial for AUVs to perform specific tasks. This paper addresses the static path planning problem and proposes a model called the noise-net double DQN with prioritized experience replay (N-DDQNP). The N-DDQNP model combines a noise network and a prioritized experience replay mechanism to address the limited exploration and slow convergence of the standard DQN algorithm, which stem from its greedy strategy and uniform sampling mechanism. First, a double DQN with prioritized experience replay is constructed, and exploration is driven by the noise network. Second, a compound reward function is formulated that accounts for ocean current, distance, and safety factors, ensuring prompt feedback during training. Regarding the ocean current, the reward is designed around the angle between the current direction and the AUV's heading, since the current affects the AUV's speed. For the distance factor, the reward is determined by the Euclidean distance between the current position and the target point. The safety factor considers whether the AUV may collide with obstacles. Combining these three factors yields the compound reward function. To evaluate the performance of the N-DDQNP model, experiments were conducted using real ocean data in various complex ocean environments. The results demonstrate that the N-DDQNP model achieves shorter path planning times than other algorithms across different ocean current scenarios and obstacle environments. Furthermore, a connection between the user console and the AUV has been established using SPICE cloud desktop technology. The cloud desktop architecture enables intuitive observation of the AUV's navigation posture and the surrounding marine environment, facilitating safer and more efficient underwater exploration and marine resource detection tasks.
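The abstract does not give the exact form or coefficients of the compound reward function, so the Python sketch below is only an illustration of how the three stated factors (ocean current, distance, and safety) might be combined; the weight values, function name, and argument layout are assumptions for illustration, not the paper's implementation.

```python
import math

# Illustrative weights only; the paper's actual coefficients are not stated in the abstract.
W_CURRENT, W_DISTANCE = 0.3, 1.0
COLLISION_PENALTY, GOAL_REWARD = -10.0, 10.0


def compound_reward(position, heading, current_direction, goal, collided, reached):
    """Sketch of a compound reward combining current, distance, and safety terms.

    position, goal:     (x, y) tuples in the planning grid
    heading:            AUV heading angle in radians
    current_direction:  ocean-current direction angle in radians
    collided, reached:  booleans reported by the environment
    """
    # Safety factor: large penalty on collision, bonus on reaching the target.
    if collided:
        return COLLISION_PENALTY
    if reached:
        return GOAL_REWARD

    # Current factor: cosine of the angle between the current direction and the
    # AUV heading, rewarding motion with the current and penalizing motion
    # against it, since the current changes the AUV's effective speed.
    current_term = math.cos(current_direction - heading)

    # Distance factor: negative Euclidean distance to the target point, so the
    # reward increases as the AUV approaches the goal.
    distance_term = -math.hypot(goal[0] - position[0], goal[1] - position[1])

    return W_CURRENT * current_term + W_DISTANCE * distance_term
```

In this sketch the per-step reward is dominated by progress toward the goal, with the current term acting as a secondary shaping signal; how the paper actually balances the three factors is determined by its own (unstated here) weighting.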
