The rapid growth in vehicle ownership has increased traffic congestion, making the need for autonomous driving solutions more urgent. Autonomous Vehicles (AVs) offer a promising way to improve road safety and reduce traffic accidents by adapting to varied driving conditions without human intervention. This research applies a Deep Q-Network (DQN) to enhance AV performance across three driving modes: safe, normal, and aggressive. DQN was selected for its ability to handle complex, dynamic environments through experience replay, asynchronous training, and epsilon-greedy exploration. We designed a simulation environment on the Highway-env platform and evaluated the DQN model under varying traffic densities. AV performance was assessed with two key metrics: success rate and total reward. The DQN model achieved success rates of 90.75%, 94.625%, and 95.875% in the safe, normal, and aggressive modes, respectively. Although the success rate rose across these modes, the total reward remained lower in aggressive driving scenarios, indicating room for optimization of decision-making under highly dynamic conditions. This study demonstrates that DQN can adapt effectively to different driving needs, but further optimization is needed to improve performance in more challenging environments. Future work will focus on improving the DQN algorithm to maximize both success rate and reward in high-traffic scenarios and on testing the model in more diverse and complex environments.
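The full paper details the training setup; as a rough illustration of the pipeline the abstract describes, the sketch below trains a DQN agent with experience replay and epsilon-greedy exploration on Highway-env using the stable-baselines3 library, then estimates success rate and total reward over evaluation episodes. The environment id, all hyperparameters, and the crash-free success criterion are illustrative assumptions, not values taken from the paper.

    import gymnasium as gym
    import highway_env  # registers the highway driving environments on import
    from stable_baselines3 import DQN

    env = gym.make("highway-fast-v0")  # assumed task; the paper's exact scenario may differ

    model = DQN(
        "MlpPolicy",
        env,
        buffer_size=15_000,           # experience-replay buffer (illustrative size)
        learning_rate=5e-4,           # illustrative value, not from the paper
        exploration_fraction=0.3,     # epsilon-greedy annealing schedule
        exploration_final_eps=0.05,
        gamma=0.9,
        verbose=1,
    )
    model.learn(total_timesteps=20_000)

    # Evaluate: treat a crash-free episode as a success and accumulate reward.
    episodes, successes, total_reward = 100, 0, 0.0
    for _ in range(episodes):
        obs, info = env.reset()
        done = truncated = False
        while not (done or truncated):
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, truncated, info = env.step(action)
            total_reward += float(reward)
        if not info.get("crashed", False):  # highway-env reports crashes in the info dict
            successes += 1
    print(f"success rate: {100 * successes / episodes:.2f}%, total reward: {total_reward:.1f}")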