Abstract

Multi-agent reinforcement learning (MARL) methods for adaptive traffic signal control (ATSC) have shown promise in alleviating heavy traffic. Existing MARL methods adopt either centralized or distributed strategies. The former models the entire environment as a single agent and suffers from exponential growth of the action and state spaces. The latter directly extends independent reinforcement learning methods, such as DQN, to multiple interacting agents, or propagates information such as states and policies without accounting for its quality. In this paper, we propose a multi-agent transfer reinforcement learning method, termed multi-agent transfer soft actor-critic with a multi-view encoder (MT-SAC), to enhance the performance of MARL for ATSC. MT-SAC combines centralized and distributed strategies. Within MT-SAC, we propose a multi-view state encoder and a guided transfer learning paradigm. The encoder processes input states from multiple perspectives and uses an attention mechanism to weigh neighborhood information, while the transfer learning paradigm enables agents to handle diverse traffic conditions and improves their generalization ability. Experimental studies on road networks of different scales show that MT-SAC outperforms state-of-the-art algorithms and makes traffic signal controllers more collaborative and robust.
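The abstract mentions an attention mechanism that weighs neighborhood information inside the multi-view state encoder. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common way to realize such weighting: scaled dot-product attention in which an intersection's own state acts as the query over its neighbors' states. All names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def attention_aggregate(own_state: np.ndarray, neighbor_states: np.ndarray) -> np.ndarray:
    """Weigh neighbor states via scaled dot-product attention (illustrative sketch).

    own_state:       (d,)  feature vector of the local intersection (the query).
    neighbor_states: (n, d) feature vectors of n neighboring intersections.
    Returns a (d,) context vector: a softmax-weighted sum of neighbor states,
    so neighbors whose state is more similar to the local state get more weight.
    """
    d = own_state.shape[0]
    # Similarity scores between the local state and each neighbor, scaled by sqrt(d).
    scores = neighbor_states @ own_state / np.sqrt(d)          # shape (n,)
    # Numerically stable softmax over neighbors.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum of neighbor states -> aggregated neighborhood context.
    return weights @ neighbor_states                           # shape (d,)


# Example: the first neighbor's state resembles the local state, so it dominates.
own = np.array([1.0, 0.0, 0.5])
neighbors = np.array([[0.9, 0.1, 0.4],
                      [0.0, 1.0, 0.0]])
context = attention_aggregate(own, neighbors)
```

In a full encoder, the context vector would typically be concatenated with (or added to) the local state before being fed to the actor and critic networks; learned query/key/value projections would replace the raw dot product used here.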
