Abstract

In the past decade, reinforcement learning (RL) has achieved encouraging results in autonomous driving, especially in well-structured and regulated highway environments. However, little research has addressed RL-based multi-vehicle cooperative driving, which is far more challenging because of dynamic real-time interactions and transient scenarios. This paper proposes a Multi-Agent Reinforcement Learning (MARL) based twin-vehicle cooperative driving decision-making method that generalizes RL to highly dynamic highway environments and enhances the flexibility and effectiveness of the collaborative decision-making system. The proposed fair cooperative MARL method pays equal attention to individual intelligence and cooperative performance, and employs a stable estimation method to reduce the propagation of overestimated joint <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$Q$</tex-math></inline-formula>-values between agents. The twin-vehicle system thus strikes a balance between maintaining formation and overtaking freely in dynamic highway environments, adapting intelligently to different scenarios such as heavy traffic, light traffic, and even emergencies. Targeted experiments show that our method achieves strong cooperative performance and further increases the possibility of creating a harmonious driving environment.
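The abstract does not specify the paper's stable estimation method. As one common illustration of how overestimated joint Q-values can be damped before they propagate between agents, the sketch below uses a clipped double-Q (twin-critic) target, taking the elementwise minimum of two independently trained critics when bootstrapping. All function names and numeric values here are hypothetical, not taken from the paper.

```python
import numpy as np

def twin_q_target(q1_next, q2_next, reward, gamma=0.99):
    """Clipped double-Q bootstrap target (illustrative, not the paper's
    exact method): the elementwise minimum of two independent critics'
    next-step joint Q-estimates damps the upward bias that a single
    max-based bootstrap would otherwise pass on to other agents."""
    return reward + gamma * np.minimum(q1_next, q2_next)

# Toy next-step joint Q-values from two critics for two agents (made-up numbers)
q1_next = np.array([1.2, 0.8])
q2_next = np.array([0.9, 1.1])
reward = np.array([0.5, 0.5])

target = twin_q_target(q1_next, q2_next, reward)
# Each agent's target uses min(q1, q2): 0.5 + 0.99*0.9 and 0.5 + 0.99*0.8
```

Because each agent's bootstrap target is capped by the more pessimistic critic, an inflated estimate from one critic cannot dominate the targets that other agents later regress toward.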
