Abstract

This study evaluates a transfer learning algorithm for enhancing the transferability of deep reinforcement learning-based variable speed limit (VSL) control. A Double Deep Q-Network (DDQN)-based VSL control strategy is proposed to reduce total time spent (TTS) on freeways. A real-world merging bottleneck is modeled in simulation and serves as the source scenario for VSL control. Three types of target scenarios are considered: overspeed scenarios, adverse weather scenarios, and scenarios with varying capacity drops. A stable testing demand and a fluctuating testing demand are adopted to evaluate the effects of VSL control. The results show that, by updating its neural networks, the DDQN-based VSL control agent successfully transfers knowledge learned in the source scenario to the target scenarios. With transfer learning, the entire training process is shortened by 32.3% to 69.8% while reaching a similar maximum reward level, compared with VSL control trained from scratch. With the transferred DDQN-based VSL strategy, TTS is reduced by 26.02% to 67.37% under the stable testing demand and by 21.31% to 69.98% under the fluctuating testing demand across the target scenarios. The results also show that when the task similarity between the source and target scenarios is relatively low, transfer learning can converge to a local optimum and may not achieve globally optimal control effects.
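The sketch below illustrates the transfer step the abstract describes: a DDQN agent's Q-network trained in the source scenario initializes the target-scenario agent, which is then fine-tuned rather than trained from scratch. This is a minimal illustration assuming a PyTorch implementation; the network architecture, state and action dimensions, checkpoint name, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of transfer learning for a DDQN-based VSL agent (PyTorch).
# All shapes, file names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a traffic-state vector to Q-values over discrete speed-limit actions."""
    def __init__(self, state_dim: int = 8, n_speed_limits: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_speed_limits),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Transfer step: initialize the target-scenario agent from the weights learned
# in the source scenario, then continue DDQN training (fine-tuning) instead of
# learning from scratch.
online_net = QNetwork()
online_net.load_state_dict(torch.load("source_scenario_ddqn.pt"))  # hypothetical checkpoint
target_net = QNetwork()
target_net.load_state_dict(online_net.state_dict())  # sync DDQN target network
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-4)

def ddqn_loss(batch, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN update: the online net selects the next action,
    the target net evaluates it, decoupling selection from evaluation."""
    s, a, r, s2, done = batch  # tensors: states, actions, rewards, next states, done flags
    q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a2 = online_net(s2).argmax(dim=1, keepdim=True)   # action selection
        q2 = target_net(s2).gather(1, a2).squeeze(1)      # action evaluation
        target = r + gamma * (1.0 - done) * q2
    return nn.functional.mse_loss(q, target)
```

Because only the initial weights change, the fine-tuning loop is identical to ordinary DDQN training, which is consistent with the paper's finding that transfer shortens training while reaching a similar maximum reward.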
