Abstract

This paper proposes a novel human-centric approach to decision-making for autonomous vehicles in complex urban driving situations by integrating Deep Q-Network (DQN) reinforcement learning with social value orientation. In the proposed method, a deep neural network (DNN) approximates the optimal Q-values over the reachable states and possible actions. To improve optimization convergence, the Adam optimizer is employed, combining the advantages of adaptive learning rates and momentum methods. The framework also incorporates a collision-avoidance component that allows vehicles to navigate safely through pedestrian crossings. The method is validated in simulation experiments, which show that it outperforms traditional decision-making and reinforcement learning methods in terms of safety and efficiency. The results further demonstrate that integrating social value orientation with DQN reinforcement learning leads to more human-like and socially compliant decision-making for automated vehicles. This research contributes a new human-centric cyber-physical approach to automated vehicle decision-making and has significant implications for the design of future intelligent transportation systems.
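To make the described training setup concrete, the following is a minimal sketch of a DQN update step using the Adam optimizer, assuming a PyTorch implementation with a fully connected Q-network over a discrete action space. The network architecture, hyperparameters, and state/action dimensions are illustrative assumptions and are not taken from the paper.

```python
# Minimal DQN update sketch with the Adam optimizer (illustrative only).
import random
from collections import deque

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """DNN that approximates Q(s, a) for all discrete actions at once."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Assumed dimensions and discount factor (not specified in the abstract).
state_dim, n_actions, gamma = 8, 5, 0.99
q_net = QNetwork(state_dim, n_actions)
target_net = QNetwork(state_dim, n_actions)
target_net.load_state_dict(q_net.state_dict())

# Adam combines per-parameter adaptive learning rates with momentum.
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay_buffer = deque(maxlen=50_000)  # stores (s, a, r, s', done) tuples

def dqn_update(batch_size: int = 64) -> None:
    """One gradient step on a minibatch sampled from the replay buffer."""
    if len(replay_buffer) < batch_size:
        return
    batch = random.sample(replay_buffer, batch_size)
    states, actions, rewards, next_states, dones = map(
        lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch)
    )
    actions = actions.long()

    # Q(s, a) for the actions actually taken.
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Bootstrapped TD target computed with the frozen target network.
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q

    loss = nn.functional.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full implementation of the paper's framework, the reward would additionally encode the social value orientation and collision-avoidance terms described above; those terms are not specified in the abstract and are therefore omitted here.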
