Abstract

Reinforcement learning (RL) algorithms have been widely applied to traffic signal control problems. Traffic environments, however, are intrinsically nonstationary, which creates a convergence problem that RL algorithms struggle to overcome. As the target problem of an RL algorithm, a Markov decision process (MDP) can be solved only when both the transition and reward functions are invariant. Unfortunately, the environment for traffic signal control is not stationary, since the goal of traffic signal control varies with the congestion level. Under unsaturated traffic conditions, the objective of traffic signal control should be to minimize vehicle delay, whereas under saturated flow the objective must be to maximize throughput. A multiregime analysis is possible for varying conditions, but classifying the traffic regime is itself a complex task. The present study provides a meta‐RL algorithm that embeds a latent vector to recognize the different contexts of an environment, automatically classifying traffic regimes and applying a customized reward for each context. In simulation experiments, the proposed meta‐RL algorithm succeeded in differentiating rewards according to the saturation level of traffic conditions.
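As a minimal sketch of the core idea, the Python snippet below shows how an inferred latent context could switch the reward between the two regimes: delay minimization when traffic is unsaturated and throughput maximization when it is saturated. All names, the averaging encoder, and the blending rule are illustrative assumptions for exposition, not the authors' model.

```python
import numpy as np

def infer_context(recent_occupancies):
    """Encode recent lane-occupancy observations into a scalar latent context.
    A simple moving average stands in here for a learned context encoder."""
    return float(np.mean(recent_occupancies))

def contextual_reward(delay, throughput, z, saturation_point=0.8):
    """Blend the two regime-specific objectives according to the latent context z.
    Low z (unsaturated traffic): penalize delay. High z (saturated): reward throughput."""
    w = min(z / saturation_point, 1.0)  # soft regime weight in [0, 1]
    return (1.0 - w) * (-delay) + w * throughput

# Usage: a lightly loaded intersection weights delay, a saturated one throughput.
z_light = infer_context([0.20, 0.30, 0.25])
z_heavy = infer_context([0.90, 0.95, 0.85])
print(contextual_reward(delay=40.0, throughput=12.0, z=z_light))  # delay-dominated
print(contextual_reward(delay=40.0, throughput=12.0, z=z_heavy))  # throughput-dominated
```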
