The potential of autonomous underwater vehicles (AUVs) in future applications is significant due to advances in autonomy and intelligence. Path planning is a critical technology that enables AUVs to perform operational missions in complex marine environments. To this end, this paper proposes a path planning method for AUVs based on deep reinforcement learning. First, considering actual mission requirements, a complex marine environment model containing underwater terrain, sonobuoy detection, and ocean currents is established. Next, the corresponding state space, action space, and reward function are formulated. Furthermore, to address the limited training efficiency of existing deep reinforcement learning algorithms, a mixed experience replay (MER) strategy is proposed, which improves sample-learning efficiency by integrating prior knowledge with exploration experience. Finally, a novel HMER-SAC algorithm for AUV path planning is obtained by combining the Soft Actor-Critic (SAC) algorithm with a hierarchical reinforcement learning strategy and the MER strategy. Simulation and experimental results demonstrate that the method efficiently plans executable paths in complex marine environments and exhibits superior training efficiency, stability, and performance.
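The abstract does not detail how the MER strategy mixes the two experience sources. As a rough, illustrative sketch only (the class name, fixed mixing ratio, and buffer sizes below are assumptions, not the paper's implementation), one common way to realize such a strategy is to draw each training batch partly from a buffer of prior-knowledge transitions and partly from the agent's own exploration buffer:

```python
import random
from collections import deque


class MixedReplayBuffer:
    """Illustrative mixed experience replay: each sampled batch blends
    prior-knowledge transitions (e.g., from a heuristic planner) with
    the agent's own exploration transitions. All names and the fixed
    mixing ratio are assumptions for illustration."""

    def __init__(self, capacity=100_000, prior_ratio=0.3):
        self.exploration = deque(maxlen=capacity)  # transitions collected by the agent
        self.prior = []                            # transitions encoding prior knowledge
        self.prior_ratio = prior_ratio             # assumed fraction of each batch from prior knowledge

    def add_prior(self, transition):
        """Store a prior-knowledge transition (state, action, reward, next_state, done)."""
        self.prior.append(transition)

    def add(self, transition):
        """Store a transition gathered during the agent's own exploration."""
        self.exploration.append(transition)

    def sample(self, batch_size):
        """Sample a batch mixing both sources; falls back gracefully
        when either buffer holds fewer transitions than requested."""
        n_prior = min(int(batch_size * self.prior_ratio), len(self.prior))
        n_explore = min(batch_size - n_prior, len(self.exploration))
        batch = random.sample(self.prior, n_prior) if n_prior else []
        batch += random.sample(self.exploration, n_explore)
        random.shuffle(batch)  # avoid ordering bias between the two sources
        return batch
```

In such a scheme, the prior-knowledge buffer seeds early training with informative transitions while the exploration buffer gradually dominates as the agent gathers its own experience; whether the paper uses a fixed or annealed mixing ratio is not stated in the abstract.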