In recent years, autonomous underwater vehicles (AUVs) have developed rapidly, and their motion control has garnered increasing attention. In industry, PID controllers are still widely employed on most AUVs due to their simplicity, ease of deployment, and reasonable robustness. However, they face significant challenges in parameter tuning, especially across diverse control missions and changing external environments. Deep reinforcement learning, as a data-driven approach, has increasingly been applied to AUV control, but its limited interpretability has hindered deployment in field experiments. To address these issues, this paper proposes an adaptive PID controller for AUV path following based on the Soft Actor-Critic (SAC) algorithm, combining the interpretability of PID control with the adaptivity of reinforcement learning. A simulation platform was established, and comparisons with representative control methods demonstrate the superiority of the proposed controller. Finally, the feasibility of the proposed SAC-PID controller was validated through lake trials. The results show that the SAC-PID controller significantly outperforms both the conventional PID controller and a Proximal Policy Optimization (PPO)-based PID controller in terms of control accuracy and convergence speed.
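To make the control structure concrete, the sketch below illustrates the general idea of an RL-tuned adaptive PID loop: at each step, a trained SAC actor maps the path-following state (e.g., cross-track and heading errors) to a set of PID gains, and the PID law then computes the actuator command. This is a minimal illustration only; the class and function names (AdaptivePID, sac_actor), the state definition, and the gain bounds are assumptions for exposition, not the paper's actual implementation, and the SAC actor is stubbed with a placeholder rather than a trained network.

```python
import numpy as np

class AdaptivePID:
    """Incremental PID controller whose gains can be retuned online."""

    def __init__(self, dt: float):
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float, kp: float, ki: float, kd: float) -> float:
        """Compute the control output for the current error and gains."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative

def sac_actor(state: np.ndarray) -> np.ndarray:
    """Placeholder for a trained SAC policy network.

    Assumed to map the path-following state to normalized gains in
    [0, 1]; here a fixed stand-in is returned so the sketch runs.
    """
    return np.array([0.5, 0.1, 0.3])

# Hypothetical gain bounds used to rescale the policy output.
GAIN_LOW = np.array([0.0, 0.0, 0.0])
GAIN_HIGH = np.array([10.0, 1.0, 5.0])

pid = AdaptivePID(dt=0.1)
cross_track_error, heading_error = 2.0, 0.15  # example state

for _ in range(100):
    state = np.array([cross_track_error, heading_error])
    kp, ki, kd = GAIN_LOW + sac_actor(state) * (GAIN_HIGH - GAIN_LOW)
    rudder_cmd = pid.step(heading_error, kp, ki, kd)
    # ...apply rudder_cmd to the vehicle model and update the state...
```

In this arrangement, the PID law remains the element actually driving the actuators, which is what preserves interpretability, while the learned policy only adjusts its gains within bounded ranges.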