Abstract

This study presents the application of the XCS learning algorithm to simulating drivers' lane selection behaviour in microscopic simulation models of toll plazas. XCS is a special case of learning classifier systems, a machine learning technique that combines reinforcement learning (RL) with a genetic algorithm. The proposed formulation translates an agent's lane selection decision into a learning problem, assuming that the agent's ultimate objective is to reduce delay and crash risk. The agent starts with no knowledge of the outcomes of its decisions; through multiple learning episodes and the observed outcomes of its actions, it learns the best policy for a given network setting. A hypothetical toll plaza simulation network developed in the Paramics simulation software is used to conduct the experimental analyses. The results demonstrate that the agent's toll lane selection yields superior results in terms of delay and crash risk compared with minimum queue, minimum risk, random and multinomial probit model-based lane selection behaviours. Although the delay and crash risk objectives are not used simultaneously during learning, this study serves as a proof of concept demonstrating the feasibility of implementing RL algorithms in microscopic traffic simulation models.
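As a rough illustration of how a lane selection decision can be posed as a reinforcement learning problem, the sketch below uses plain tabular Q-learning rather than the paper's XCS classifier system. The state features (discretized per-lane queues), the reward weights, and the toy queue dynamics are hypothetical assumptions for illustration only and do not come from the paper or its Paramics network.

```python
# Minimal sketch (not the paper's XCS implementation): a tabular Q-learning
# agent choosing among toll lanes. State features, reward weights, and the
# toy queue dynamics are illustrative assumptions only.
import random
from collections import defaultdict

N_LANES = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def observe(queues):
    # Discretize per-lane queue lengths into a hashable state (assumed features).
    return tuple(min(q // 3, 3) for q in queues)

def reward(delay, crash_risk, w_delay=1.0, w_risk=5.0):
    # Negative weighted sum of delay and crash risk; weights are hypothetical.
    return -(w_delay * delay + w_risk * crash_risk)

Q = defaultdict(lambda: [0.0] * N_LANES)

def choose_lane(state):
    # Epsilon-greedy exploration over the learned action values.
    if random.random() < EPSILON:
        return random.randrange(N_LANES)
    values = Q[state]
    return values.index(max(values))

def simulate_episode():
    # Toy stand-in for a toll plaza: delay grows with the chosen lane's queue,
    # crash risk grows with the number of lanes the agent cuts across.
    queues = [random.randint(0, 10) for _ in range(N_LANES)]
    current_lane = random.randrange(N_LANES)
    state = observe(queues)
    lane = choose_lane(state)
    delay = queues[lane] * 2.0
    crash_risk = 0.1 * abs(lane - current_lane)
    r = reward(delay, crash_risk)
    next_state = observe([max(q - 1, 0) for q in queues])
    # Standard Q-learning update toward the observed outcome.
    Q[state][lane] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][lane])

for episode in range(5000):
    simulate_episode()
```

In the paper's XCS setting, the tabular value function would be replaced by an evolving population of condition-action classifiers whose predictions are refined by RL updates and whose rules are generated by a genetic algorithm; the sketch only conveys the episode-and-reward structure of the learning problem.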
