Abstract

Nonrecurring disruptions to traffic systems caused by incidents or adverse conditions can result in uncertain travel times. Real-time information allows travelers to adapt to actual traffic conditions. In a behavioral experiment, subjects completed 120 "days" of repeated route choices in a hypothetical, competitive network subjected to random capacity reductions. One scenario provided subjects with real-time information about a probable incident, while the other did not. A reinforcement learning model with two scale factors, a discounting rate applied to previous experience and a constant term, is estimated by minimizing the deviation between predicted and observed daily flows. The estimation combines brute-force enumeration with a subsequent stochastic approximation method. The prediction over 120 runs has a root mean square error of 1.05 per day per route and a bias of 0.14 per route.
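The following is a minimal sketch of the forward simulation behind a reinforcement-learning route-choice model of this kind. The network setup, the logit choice rule, and the symbols phi (discount rate of previous experience), beta, and c (scale factors) are illustrative assumptions, not the paper's exact specification; the paper's estimation step, fitting these parameters by minimizing the deviation between predicted and observed daily flows via enumeration followed by stochastic approximation, is not reproduced here.

```python
import numpy as np

# Illustrative sketch only: the congestion function, incident probability, and
# parameter values below are assumptions, not taken from the paper.

rng = np.random.default_rng(0)

N_DAYS, N_ROUTES, N_SUBJECTS = 120, 2, 30
FREE_FLOW = np.array([20.0, 25.0])   # nominal travel times per route (assumed)
phi, beta, c = 0.8, 0.3, 1.0         # discount rate and two scale factors (assumed)

def travel_time(flows):
    """Travel time grows with the flow on a route, plus an occasional random
    capacity reduction representing a nonrecurring incident."""
    incident = rng.random(N_ROUTES) < 0.1     # assumed 10% incident chance per route
    return FREE_FLOW + 0.5 * flows + 15.0 * incident

def simulate(phi, beta, c):
    """Return predicted daily route flows under the learning model."""
    # Each subject starts with the free-flow times as remembered experience.
    experience = np.tile(FREE_FLOW, (N_SUBJECTS, 1))
    daily_flows = np.zeros((N_DAYS, N_ROUTES))
    for day in range(N_DAYS):
        # Logit choice: lower remembered travel time -> higher choice probability.
        utility = -(beta * experience + c)
        prob = np.exp(utility - utility.max(axis=1, keepdims=True))
        prob /= prob.sum(axis=1, keepdims=True)
        choices = np.array([rng.choice(N_ROUTES, p=p) for p in prob])
        flows = np.bincount(choices, minlength=N_ROUTES)
        daily_flows[day] = flows
        # Reinforcement update: discount previous experience and blend in
        # today's realized travel time on the chosen route.
        realized = travel_time(flows)
        for s, r in enumerate(choices):
            experience[s, r] = phi * experience[s, r] + (1 - phi) * realized[r]
    return daily_flows

if __name__ == "__main__":
    predicted = simulate(phi, beta, c)
    print("mean daily flows per route:", predicted.mean(axis=0))
```

Estimation would then wrap `simulate` in an objective such as the root mean square error between predicted and observed daily flows, first scanning a coarse grid of (phi, beta, c) and then refining the best candidate with a stochastic approximation step.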
