Abstract

Designing a vehicle platoon control system involves several issues; among them, speed consensus and regulation of the inter-vehicle gap play the primary role. In addition, reliable and fast gap-closing/opening maneuvers are essential for establishing a platoon. Nonetheless, little research has addressed a single algorithm capable of simultaneously handling speed tracking, maintenance of a safe headway, and gap closing/opening. As deep reinforcement learning (DRL) applications in driving strategies are promising, this paper develops a multi-task deep deterministic policy gradient (DDPG) car-following algorithm for a platoon system. The proposed approach combines gap closing/opening with a unified platoon control strategy; to this end, an effective virtual inter-vehicle distance is employed in the reward of the developed DRL-based platoon controller. This distance definition, which is based on the action taken by the ego-vehicle, gives the agent a precise understanding of the consequences of its actions. Moreover, by imposing a constraint on the variation of the ego-vehicle's speed relative to its predecessor, speed chattering of the ego-vehicle is reduced. The developed algorithm is implemented in the realistic traffic simulator SUMO (Simulation of Urban Mobility), and the performance of the control strategy is evaluated under different traffic scenarios.
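
To make the ideas in the abstract concrete, the following is a minimal illustrative sketch of a reward of the kind described: a gap-regulation term built on an action-aware ("virtual") inter-vehicle distance, a speed-consensus term, and a penalty on changes of the relative speed to limit chattering. The exact definitions, weights, time headway, and constants below are assumptions for illustration only, not the paper's actual formulation.

```python
# Hypothetical constants -- assumed for illustration, not taken from the paper.
DESIRED_TIME_HEADWAY = 1.5   # s, assumed constant time-gap spacing policy
MIN_STANDSTILL_GAP = 2.0     # m, assumed minimum spacing at zero speed
DT = 0.1                     # s, assumed simulation step
W_GAP, W_SPEED, W_CHAT = 1.0, 0.5, 0.2  # assumed reward weights


def virtual_gap(current_gap, ego_speed, leader_speed, ego_accel_cmd, dt=DT):
    """Predict the inter-vehicle gap one step ahead, assuming the leader holds
    its current speed and the ego vehicle applies the commanded acceleration.
    This action-aware gap only illustrates the idea of a 'virtual' distance
    that depends on the ego vehicle's action."""
    leader_disp = leader_speed * dt
    ego_disp = ego_speed * dt + 0.5 * ego_accel_cmd * dt ** 2
    return current_gap + leader_disp - ego_disp


def reward(current_gap, ego_speed, leader_speed, ego_accel_cmd, prev_rel_speed):
    """Toy reward combining gap regulation, speed consensus, and a penalty on
    the change of relative speed (to reduce chattering). All terms are assumptions."""
    desired_gap = MIN_STANDSTILL_GAP + DESIRED_TIME_HEADWAY * ego_speed
    v_gap = virtual_gap(current_gap, ego_speed, leader_speed, ego_accel_cmd)

    gap_error = abs(v_gap - desired_gap)            # spacing regulation
    rel_speed = leader_speed - ego_speed
    speed_error = abs(rel_speed)                    # speed consensus
    chattering = abs(rel_speed - prev_rel_speed)    # penalize rapid variation

    return -(W_GAP * gap_error + W_SPEED * speed_error + W_CHAT * chattering)


if __name__ == "__main__":
    # Example: ego vehicle 30 m behind a leader travelling 2 m/s faster.
    r = reward(current_gap=30.0, ego_speed=25.0, leader_speed=27.0,
               ego_accel_cmd=0.8, prev_rel_speed=1.5)
    print(f"illustrative reward: {r:.3f}")
```

In a DDPG setting, a reward of this shape would be evaluated at every control step and returned by the simulation environment (e.g., a SUMO/TraCI wrapper) to train the actor-critic pair; the sketch above deliberately omits the learning loop.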
