Abstract

As traffic congestion in urban transportation systems becomes more severe and frequent, many reinforcement learning (RL) based models have been proposed to alleviate it. The traffic problem can be cast as a multi-agent reinforcement learning (MARL) system in which the incoming links (i.e., road sections) are regarded as agents whose actions control the signal lights. This paper proposes a semi-cooperative Nash Q-learning approach, built on single-agent Q-learning and Nash equilibrium, in which the agents select actions according to a Nash equilibrium but behave cooperatively toward a common goal when more than one Nash equilibrium exists. An extended variant, semi-cooperative Stackelberg Q-learning, is then designed for comparison, replacing the Nash equilibrium with a Stackelberg equilibrium in the Q-learning process: the agent with the longest queue is promoted to leader, and the remaining agents are followers who react to the leader's decision. Rather than adjusting green-light timing plans as in other research, this paper contributes a method for finding the best multi-route plan that passes the most vehicles through a single traffic intersection, combining game theory and RL for decision-making in the multi-agent framework. Both multi-agent Q-learning methods are implemented and compared with a constant strategy (i.e., fixed, periodic green and red light intervals). Simulation results show that semi-cooperative Stackelberg Q-learning performs better.
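The abstract does not give the algorithm itself; the following is a minimal Python sketch of the Stackelberg-style selection and update step it describes, assuming just two agents (incoming links) with small discrete action sets. All names (Q, stackelberg_joint_action, q_update, N_ACTIONS, ALPHA, GAMMA) and the state/action encodings are hypothetical illustrations, not taken from the paper.

```python
import numpy as np
from collections import defaultdict

N_ACTIONS = 2        # illustrative: 0 = hold current phase, 1 = request green
ALPHA, GAMMA = 0.1, 0.9

# One joint-action Q-table per agent, indexed Q[i][state][a0, a1],
# where a0 is agent 0's action and a1 is agent 1's action.
Q = [defaultdict(lambda: np.zeros((N_ACTIONS, N_ACTIONS))) for _ in range(2)]

def stackelberg_joint_action(state, queue_lengths):
    """Pick a joint action: the link with the longest queue leads,
    the other best-responds, and the leader anticipates that response."""
    leader = int(np.argmax(queue_lengths))
    follower = 1 - leader

    def joint(a_lead, a_follow):
        a = [0, 0]
        a[leader], a[follower] = a_lead, a_follow
        return tuple(a)

    # Follower's best response to each candidate leader action.
    br = {al: max(range(N_ACTIONS),
                  key=lambda af: Q[follower][state][joint(al, af)])
          for al in range(N_ACTIONS)}
    # Leader chooses while anticipating the follower's reply.
    a_lead = max(range(N_ACTIONS),
                 key=lambda al: Q[leader][state][joint(al, br[al])])
    return joint(a_lead, br[a_lead]), leader

def q_update(i, state, a, reward, next_state, next_queues):
    """Standard Q-learning backup for agent i, bootstrapping on the
    Stackelberg joint action at the next state instead of the
    single-agent max."""
    a_next, _ = stackelberg_joint_action(next_state, next_queues)
    target = reward + GAMMA * Q[i][next_state][a_next]
    Q[i][state][a] += ALPHA * (target - Q[i][state][a])
```

In the Nash variant described above, the same backup would instead bootstrap on a Nash-equilibrium joint action of the stage game defined by the agents' Q-values at the next state, with cooperative tie-breaking when several equilibria exist.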
