This study develops a visual-based docking system (VDS) for an autonomous underwater vehicle (AUV), significantly enhancing docking performance by integrating intelligent object recognition with deep reinforcement learning (DRL). The system overcomes the limitations of traditional navigation in complex, unpredictable environments by using a variable information dock (VID) that the AUV's sensors can recognize precisely during docking. Employing image-based visual servoing (IBVS), the VDS converts 2D visual measurements into 3D motion control commands. It integrates the You Only Look Once (YOLO) algorithm for object recognition and the deep deterministic policy gradient (DDPG) algorithm for continuous motion control, improving docking accuracy and adaptability. Experimental validation in the National Cheng Kung University towing tank demonstrates that the VDS enhances control stability and operational reliability, reducing the mean absolute error (MAE) by 42.03% in depth control and 98.02% in pitch control compared to the previous method. These results confirm the VDS's reliability and its potential for transforming AUV docking.
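The IBVS step described above — converting a 2D image-feature error into a 3D velocity command — is classically done with an interaction-matrix (image Jacobian) control law. The sketch below illustrates that general technique only; the point features, depths, and gain are illustrative assumptions, not the paper's actual implementation or the VID's marker layout.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian L for one normalized image point (x, y) at depth Z.

    Each row maps the camera velocity screw (vx, vy, vz, wx, wy, wz)
    to the feature's image velocity (x_dot, y_dot). Standard IBVS form.
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Proportional IBVS law: v = -lam * pinv(L) @ (s - s*).

    features / desired: lists of (x, y) normalized image coordinates.
    depths: estimated depth Z of each feature point (assumed known here).
    Returns a 6-vector camera velocity command.
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

# Hypothetical example: four detected dock-light features vs. their
# desired positions when the vehicle is aligned with the dock.
desired = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
current = [(0.15, 0.12), (-0.05, 0.12), (-0.05, -0.08), (0.15, -0.08)]
v = ibvs_velocity(current, desired, depths=[2.0] * 4)
```

In a full system, a detector (such as YOLO, as in the abstract) would supply `current` each frame, and the resulting velocity screw would be passed to the low-level controller; here DDPG replaces a fixed-gain law for the continuous control stage.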