Abstract

Reinforcement Learning (RL) has been proposed to deal with ramp control problems under dynamic traffic conditions; however, there is a lack of sufficient research on the behaviour and impacts of different learning parameters. This paper describes a ramp control agent based on the RL mechanism and thoroughly analyzes the influence of three learning parameters, namely the learning rate, the discount rate and the action selection parameter, on the algorithm performance. Two indices, for learning speed and convergence stability, were used to measure the algorithm performance, and a series of simulation-based experiments were designed and conducted around them using a macroscopic traffic flow model. Simulation results showed that, compared with the discount rate, the learning rate and the action selection parameter had a more pronounced impact on the algorithm performance. Based on this analysis, suggestions are provided on how to select parameter values that achieve superior performance.

Highlights

  • After more than 50 years of application, ramp control has been identified as one of the most effective control methods on motorways [1]

  • Where N is the number of episodes after the benchmark is reached, TTSi is the total time spent in the i-th episode after the benchmark is reached, and the mean TTS is taken over those last N episodes
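The convergence-stability index suggested by the highlight above can be sketched as the spread of TTS over the last N episodes. This is an illustrative reconstruction, not the paper's exact formula; the function name `convergence_index` and the use of the standard deviation are assumptions.

```python
import math

def convergence_index(tts, n):
    """Standard deviation of the total time spent (TTS) over the last n
    episodes after the learning benchmark is reached; a smaller value
    indicates more stable convergence."""
    last = tts[-n:]
    mean = sum(last) / n
    return math.sqrt(sum((x - mean) ** 2 for x in last) / n)
```

A perfectly stable learner, whose TTS no longer changes across the last N episodes, scores 0 on this index.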


Summary

INTRODUCTION

After more than 50 years of application, ramp control (or ramp metering) has been identified as one of the most effective control methods on motorways [1]. Traditional ramp control strategies, however, have limitations in adapting to dynamic traffic conditions. To overcome these limitations, reinforcement learning (RL) was recently proposed by Jacob and Abdulhai [7, 8] to solve ramp control problems based on the Markov decision process (MDP) and dynamic programming (DP). Following this contribution, several studies have shown the effectiveness of RL for ramp control under different settings and conditions. Although some efforts have been made to explore the application of RL in the ramp control domain, the questions of how to set the parameters of RL-based ramp control strategies, and how these settings influence the algorithm performance, have not been widely studied. The aim of this paper is to develop a ramp control agent following the RL mechanism, on the basis of which the influence of different learning parameter values is analyzed.
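The three learning parameters studied in the paper all appear in a single Q-learning update with epsilon-greedy action selection. The sketch below is a generic illustration of that mechanism, not the paper's agent: the tabular `Q` dictionary, the `transition_fn`/`reward_fn` callables and the default parameter values are all assumptions.

```python
import random

def q_learning_step(Q, state, actions, reward_fn, transition_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.1):
    """One Q-learning step exposing the three parameters analyzed in the
    paper: learning rate alpha, discount rate gamma, and the epsilon-greedy
    action-selection parameter epsilon.  Q maps (state, action) -> value."""
    # Epsilon-greedy action selection: explore with probability epsilon,
    # otherwise pick the action with the highest current Q-value.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    next_state = transition_fn(state, action)
    reward = reward_fn(state, action, next_state)
    # Temporal-difference update toward the discounted best next value.
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return next_state
```

A larger `alpha` speeds up learning at the risk of oscillation, while `epsilon` trades exploration against exploitation, which is consistent with the paper's finding that these two parameters affect performance more strongly than `gamma`.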

REINFORCEMENT LEARNING
Q-learning
Action selection
Asymmetric cell transmission model
Agent design
Control algorithm
SIMULATION ANALYSIS
Experiment design
Results and discussion
CONCLUSIONS