Abstract

In general, the performance of model-based controllers cannot be guaranteed under model uncertainties or disturbances, while learning-based controllers require an extensive training process before they perform well. These issues are especially pronounced for large-scale nonlinear systems such as urban traffic networks. In this paper, a new framework is proposed that combines model predictive control (MPC) and reinforcement learning (RL) to provide the desired performance for urban traffic networks even during the learning process, despite model uncertainties and disturbances. MPC and RL complement each other well: MPC provides a sub-optimal, constraint-satisfying control input, while RL provides adaptive control laws and can handle uncertainties and disturbances. The resulting combined framework is applied to traffic signal control (TSC) of an urban traffic network. A case study is carried out to compare the performance of the proposed framework with that of baseline controllers. The results show that the proposed combined framework outperforms conventional control methods under system uncertainties, in terms of reducing traffic congestion.
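To make the complementarity concrete, below is a minimal, hypothetical Python sketch of one way an MPC/RL combination of this flavor could look for a toy two-queue intersection. Everything in it, including the simplified queue model, the arrival rates, the queue-length constraint, the brute-force MPC, the tabular Q-learning agent, and the rule that falls back to the MPC input whenever the RL action's nominal one-step prediction violates the constraint, is an assumption for illustration only and is not the paper's actual algorithm.

```python
import numpy as np

# NOTE: purely illustrative toy example; the model, parameters, and the
# fallback rule below are assumptions, not the paper's actual method.
ARRIVALS = np.array([0.4, 0.3])   # assumed mean vehicle arrivals per step
SATURATION = 1.0                  # assumed max discharge during one green step
ACTIONS = (0, 1)                  # which of the two queues receives green
QUEUE_CAP = 10.0                  # assumed hard constraint on queue length

def nominal_step(queues, action):
    """Noise-free (nominal) queue update used as the MPC prediction model."""
    served = np.zeros(2)
    served[action] = min(SATURATION, queues[action])
    return np.clip(queues + ARRIVALS - served, 0.0, None)

def true_step(queues, action, rng):
    """'Real' system: same structure, but with stochastic demand."""
    served = np.zeros(2)
    served[action] = min(SATURATION, queues[action])
    return np.clip(queues + rng.poisson(ARRIVALS) - served, 0.0, None)

def mpc_action(queues, horizon=3):
    """Brute-force finite-horizon MPC on the nominal model: enumerate all
    action sequences, reject constraint-violating ones, and return the
    first action of the cheapest feasible sequence."""
    best_cost, best_first = np.inf, ACTIONS[0]
    for seq in np.ndindex(*(len(ACTIONS),) * horizon):
        q, cost = queues.copy(), 0.0
        for a in seq:
            q = nominal_step(q, a)
            cost += q.sum()
            if q.max() > QUEUE_CAP:   # constraint violated: discard sequence
                cost = np.inf
                break
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return int(best_first)

def discretize(queues):
    """Coarse state for the tabular RL agent."""
    return tuple(int(min(v, 10)) for v in queues)

def run(episodes=50, steps=200, eps=0.2, alpha=0.1, gamma=0.95, seed=0):
    rng = np.random.default_rng(seed)
    Q = {}  # tabular Q-values: discretized state -> action values
    for _ in range(episodes):
        queues = np.zeros(2)
        for _ in range(steps):
            s = discretize(queues)
            q_vals = Q.setdefault(s, np.zeros(len(ACTIONS)))
            # RL proposes an epsilon-greedy action ...
            a = int(rng.integers(len(ACTIONS))) if rng.random() < eps \
                else int(q_vals.argmax())
            # ... but if its nominal one-step prediction violates the queue
            # constraint, fall back to the constraint-satisfying MPC input.
            if nominal_step(queues, a).max() > QUEUE_CAP:
                a = mpc_action(queues)
            queues_next = true_step(queues, a, rng)
            reward = -queues_next.sum()   # objective: minimize total queue
            q_next = Q.setdefault(discretize(queues_next),
                                  np.zeros(len(ACTIONS)))
            # standard Q-learning update on the action actually executed
            q_vals[a] += alpha * (reward + gamma * q_next.max() - q_vals[a])
            queues = queues_next
    return Q

if __name__ == "__main__":
    run()
```

In this sketch the MPC layer acts as a safe baseline that guarantees constraint satisfaction from the first control step, while the RL layer gradually adapts to the stochastic demand that the nominal model does not capture; this division of labor mirrors, at toy scale, the complementarity described in the abstract.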
