The need for a safe and reliable transportation system has made the advancement of autonomous vehicles (AVs) increasingly significant. To achieve Level 5 autonomy, as defined by the Society of Automotive Engineers, AVs must be capable of navigating complex and unconventional traffic environments. Path following is a crucial task in autonomous driving, requiring precise and safe navigation along a defined path. Traditional path-tracking methods often rely on manual parameter tuning or rule-based approaches, which may not generalize to dynamic and complex environments. Reinforcement learning has emerged as a powerful technique for learning effective control strategies through agent-environment interaction. This study investigates the efficiency of an optimized Deep Deterministic Policy Gradient (DDPG) method for controlling acceleration and steering in the path following of autonomous vehicles. The algorithm converges rapidly, enabling stable and efficient path tracking, and the trained agent produces smooth control without extreme actions. The performance of the optimized DDPG is compared with that of the standard DDPG algorithm, and the results confirm the improved efficiency of the optimized approach. This advancement could contribute to the development of autonomous driving technology.
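For context, the sketch below illustrates a standard DDPG actor-critic update for a two-dimensional continuous action (steering, acceleration), written in PyTorch. The state dimension, network sizes, and hyperparameters are illustrative assumptions only; the sketch shows the baseline algorithm and does not reproduce the optimized variant described in the study.

```python
# Minimal DDPG actor-critic sketch for a 2-D continuous action (steering, acceleration).
# Layer sizes, gamma, and tau are illustrative assumptions, not values from the study.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # outputs in [-1, 1]: [steering, acceleration]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim: int, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # Q(s, a)
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def ddpg_update(actor, critic, actor_tgt, critic_tgt, batch,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    """One DDPG gradient step on a replay-buffer batch (s, a, r, s', done)."""
    s, a, r, s2, done = batch
    # Critic update: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * critic_tgt(s2, actor_tgt(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor update: deterministic policy gradient (maximize Q under the current policy).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak averaging of the target networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```

During training, exploration is typically added by perturbing the actor's output with noise (e.g., Ornstein-Uhlenbeck or Gaussian) before clipping to the valid steering and acceleration range.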