Abstract

Realising adaptive traffic signal control (ATSC) through reinforcement learning (RL) is an important means of easing traffic congestion. This paper finds that the computing power of the central processing unit (CPU) cannot be fully utilised when Simulation of Urban MObility (SUMO) is used as the environment simulator for RL. We propose a multi-process framework for value-based RL. First, we propose a shared-memory mechanism to improve exploration efficiency. Second, we use a weight-sharing mechanism to solve the problem of asynchronous multi-process agents. We also explain why shared memory in ATSC does not lead the agent to premature local optima. Experiments verify that the sampling efficiency of the 10-process method is 8.259 times that of a single process, and that of the 20-process method is 13.409 times that of a single process. Moreover, the agent still converges to the optimal solution.
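To make the framework concrete, the following is a minimal illustrative sketch (not the authors' code) of multi-process sampling with a shared transition buffer, using Python's standard multiprocessing module. Worker processes act as asynchronous actors pushing transitions into one shared queue while the main process consumes them for value-based updates and publishes weights back; `dummy_env_step` is a hypothetical stand-in for a SUMO/TraCI environment step.

```python
# Illustrative sketch only: asynchronous actors share one replay queue
# ("shared memory") and read the latest weights ("weight sharing").
import multiprocessing as mp
import random

def dummy_env_step(state, action):
    """Placeholder for a SUMO step via TraCI; returns (next_state, reward, done)."""
    next_state = state + 1
    reward = -random.random()      # e.g., negative queue length / vehicle delay
    done = next_state >= 100       # fixed-horizon episode
    return next_state, reward, done

def actor(worker_id, replay_queue, weights, stop_event):
    """Asynchronous sampler: roll out episodes and push transitions."""
    while not stop_event.is_set():
        state, done = 0, False
        while not done and not stop_event.is_set():
            # Epsilon-greedy over a small action set (e.g., signal phases);
            # in a real setup `weights` would parameterise the Q-network.
            action = random.randrange(4)
            next_state, reward, done = dummy_env_step(state, action)
            replay_queue.put((state, action, reward, next_state, done))
            state = next_state
    # Avoid blocking process exit on unflushed queue items in this demo.
    replay_queue.cancel_join_thread()

if __name__ == "__main__":
    n_workers = 4
    replay_queue = mp.Queue()            # shared transition buffer
    stop_event = mp.Event()
    # Shared weights (a placeholder counter here) let all actors sample with
    # the latest policy, mitigating staleness among asynchronous agents.
    weights = mp.Manager().dict(step=0)

    workers = [mp.Process(target=actor, args=(i, replay_queue, weights, stop_event))
               for i in range(n_workers)]
    for w in workers:
        w.start()

    # Learner loop: drain transitions and (in a real system) run Q-learning
    # updates, then broadcast fresh weights back to the actors.
    for update in range(1000):
        transition = replay_queue.get()
        weights["step"] = update         # stand-in for a weight broadcast

    stop_event.set()
    for w in workers:
        w.join()
```

With N actor processes the environment stepping (the SUMO bottleneck) proceeds in parallel, which is the source of the multi-fold sampling-efficiency gains the abstract reports.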
