Abstract

The rapid advancement of the Internet of Things (IoT) and Artificial Intelligence (AI) has catalyzed the development of adaptive traffic control systems (ATCS) for smart cities. In particular, deep reinforcement learning (DRL) models achieve state-of-the-art performance and have great potential for practical applications. In existing DRL-based ATCS, the controlled signals collect traffic state information from nearby vehicles, and optimal actions (e.g., switching phases) are then determined based on the collected information. The DRL models fully “trust” that vehicles send true information to the traffic signals, making the ATCS vulnerable to adversarial attacks with falsified information. In view of this, this article formulates, for the first time, a novel task in which a group of vehicles cooperatively send falsified information to “cheat” DRL-based ATCS in order to reduce their total travel time. To solve the proposed task, we develop CollusionVeh, a generic and effective vehicle-colluding framework composed of a road situation encoder, a vehicle interpreter, and a communication mechanism. We employ our framework to attack established DRL-based ATCS and demonstrate that the total travel time of the colluding vehicles can be significantly reduced within a reasonable number of learning episodes, and that the colluding effect decreases as the number of colluding vehicles increases. Additionally, insights and suggestions for the real-world deployment of DRL-based ATCS are provided. The research outcomes could help improve the reliability and robustness of ATCS and better protect smart mobility systems.
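
To make the attack surface described above concrete, the following is a minimal, hypothetical sketch (not the paper's CollusionVeh code) of how a signal controller that trusts vehicle-reported states can be steered by falsified reports. The trained DRL policy is replaced here by a simple queue-counting rule purely for illustration; the class and field names (`SignalAgent`, `VehicleReport`) are assumptions.

```python
# Illustrative sketch only: a signal agent that picks a phase from
# vehicle-reported states, and how falsified reports shift its choice.
# The real system would use a trained DRL policy; a queue-counting
# heuristic stands in for it here.

class VehicleReport:
    def __init__(self, lane, position, speed):
        self.lane = lane          # approach lane id ("N", "S", "E", "W")
        self.position = position  # reported distance to stop line (m)
        self.speed = speed        # reported speed (m/s)

class SignalAgent:
    """Toy stand-in for a DRL policy: serve the approach pair with the
    longest reported queue."""
    def choose_phase(self, reports):
        queue = {}
        for r in reports:
            # Treat slow vehicles near the stop line as queued.
            if r.speed < 2.0 and r.position < 100.0:
                queue[r.lane] = queue.get(r.lane, 0) + 1
        # Phase 0 serves N/S, phase 1 serves E/W.
        ns = queue.get("N", 0) + queue.get("S", 0)
        ew = queue.get("E", 0) + queue.get("W", 0)
        return 0 if ns >= ew else 1

# Honest traffic: a short queue on the east-west approach.
honest = [VehicleReport("E", 20.0, 0.5), VehicleReport("W", 35.0, 1.0)]

# Colluding vehicles on "N" report a fake standing queue to attract green time.
falsified = honest + [VehicleReport("N", 10.0 * i, 0.0) for i in range(1, 6)]

agent = SignalAgent()
print("phase with honest reports:   ", agent.choose_phase(honest))     # 1 (E/W)
print("phase with falsified reports:", agent.choose_phase(falsified))  # 0 (N/S)
```

The sketch only shows why unverified reports are exploitable; in the paper's setting, the colluding vehicles instead learn what to report through repeated interaction with the DRL-based ATCS rather than following a hand-crafted rule.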
