Abstract

Connected Autonomous Vehicles (CAVs) are expected to improve traffic safety and efficiency by automating driving tasks. Among these tasks, lane changing is particularly challenging, as it requires the vehicle to perceive its highly dynamic surroundings, make decisions, and act on them within very short time windows. Because CAVs must optimise their actions based on large volumes of data collected from the environment, Reinforcement Learning (RL) has been widely used to develop CAV motion controllers. These controllers learn to make efficient and safe lane-changing decisions using on-board sensors and inter-vehicle communication. This paper first presents four overlapping fields that are key to the future of safe self-driving cars: CAVs, motion control, RL, and safe control. It then defines the requirements for a safe CAV controller. These requirements are used first to compare applications of Multi-Agent Reinforcement Learning (MARL) to CAV lane-change controllers, and then to evaluate state-of-the-art safety methods for RL-based motion controllers. The final section summarises research gaps and opportunities for the future development of safe MARL-based CAV motion controllers. In particular, it highlights the need to design MARL controllers with continuous control for lane changing. Moreover, since RL algorithms by themselves do not guarantee the level of safety required for such safety-critical applications, the paper offers insights into the challenges of integrating safe RL methods with MARL-based CAV motion controllers.
