Abstract
Reinforcement learning (RL) algorithms have been widely applied to traffic signal control problems. Traffic environments, however, are intrinsically nonstationary, which creates a convergence problem that RL algorithms struggle to overcome. Fundamentally, the Markov decision process (MDP) targeted by an RL algorithm can be solved only when both the transition and reward functions are stationary. Unfortunately, the environment for traffic signal control is not stationary, since the goal of traffic signal control varies with the congestion level. Under unsaturated traffic conditions, the objective of traffic signal control should be to minimize vehicle delay; when traffic flow is saturated, the objective must instead be to maximize throughput. A multiregime analysis is possible for varying conditions, but classifying the traffic regime is itself a complex task. The present study provides a meta‐RL algorithm that embeds a latent vector to recognize the different contexts of an environment, automatically classifying traffic regimes and applying a customized reward for each context. In simulation experiments, the proposed meta‐RL algorithm succeeded in differentiating rewards according to the saturation level of traffic conditions.
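The regime-dependent reward described above can be illustrated with a minimal sketch. The function below is a hypothetical simplification (the names `regime_reward`, `delay`, `throughput`, and `context` are illustrative, not from the paper): it treats the inferred latent context as a saturation level in [0, 1] and blends the two objectives, minimizing delay when unsaturated and maximizing throughput when saturated.

```python
def regime_reward(delay: float, throughput: float, context: float) -> float:
    """Hypothetical regime-dependent reward for traffic signal control.

    context: inferred saturation level in [0, 1]
             (0 = unsaturated regime, 1 = saturated regime).
    Blends delay minimization (reward = -delay) with throughput
    maximization (reward = throughput) according to the context.
    """
    return (1.0 - context) * (-delay) + context * throughput


# Unsaturated regime: reward is driven entirely by (negative) delay.
print(regime_reward(delay=10.0, throughput=5.0, context=0.0))  # -10.0
# Saturated regime: reward is driven entirely by throughput.
print(regime_reward(delay=10.0, throughput=5.0, context=1.0))  # 5.0
```

In the actual meta-RL algorithm, the context would be a latent vector inferred from the agent's experience rather than a hand-set scalar; this sketch only shows how a single reward signal can smoothly switch objectives across regimes.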
Journal: Computer-Aided Civil and Infrastructure Engineering