Abstract
Improving the resilience of urban road networks subject to various disruptions has been a central focus of urban emergency management. To date, however, effective methods for mitigating the negative impacts that disruptions such as road accidents and natural disasters impose on urban road networks remain highly insufficient. This study proposes a novel adaptive signal control strategy based on a doubly dynamic learning framework, consisting of deep reinforcement learning and day-to-day traffic dynamic learning, to improve network performance by adjusting the red/green time split. In this study, red time is regarded as extra traffic flow that discourages drivers from using affected roads, thereby reducing congestion and improving resilience when urban road networks are subject to different levels of disruption. In addition, we utilize a convolutional neural network as the Q-network to approximate Q values; the link flow distribution and link capacities constitute the state space, and actions are denoted as red/green time splits. A small network is used as a numerical example, and a fixed-time signal control and two other adaptive signal controls are employed for comparison with the proposed one. The results show that the proposed adaptive signal control based on deep reinforcement learning achieves better resilience in most cases, particularly in scenarios of moderate and severe disruption. This study may shed light on the advantages of the proposed adaptive signal control over others when dealing with major emergencies.
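The control loop the abstract describes can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's implementation: a linear Q-approximator stands in for the CNN Q-network, the state is link flows normalized by capacities, actions are discrete red-time splits, and the day-to-day flow dynamics are a toy adjustment rule in which red time acts like extra flow on the controlled links. All names and dynamics here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_links = 4
red_splits = np.array([0.2, 0.4, 0.6])        # candidate red-time fractions (assumed)
capacities = np.array([40.0, 40.0, 30.0, 30.0])

def step(flows, red_split):
    # Toy day-to-day dynamic: red time is treated as extra flow, nudging
    # drivers away from congested links (an assumption for illustration).
    effective = flows + red_split * capacities
    congestion = np.maximum(effective - capacities, 0.0)
    reward = -congestion.sum()                # less congestion = higher reward
    new_flows = flows + 0.05 * (capacities - effective)
    return np.clip(new_flows, 0.0, None), reward

def features(flows):
    # Flows normalized by capacity, plus a bias term.
    return np.concatenate([flows / capacities, np.ones(1)])

W = np.zeros((len(red_splits), n_links + 1))  # linear Q-weights, one row per action
alpha, gamma, eps = 0.05, 0.9, 0.1

flows = np.array([35.0, 35.0, 28.0, 28.0])
for day in range(200):                        # day-to-day learning loop
    phi = features(flows)
    q = W @ phi
    a = rng.integers(len(red_splits)) if rng.random() < eps else int(q.argmax())
    flows, r = step(flows, red_splits[a])
    q_next = (W @ features(flows)).max()
    W[a] += alpha * (r + gamma * q_next - q[a]) * phi  # TD(0) update
```

The doubly dynamic structure is visible in the loop: the environment (`step`) evolves flows day by day while the controller updates its Q-weights from the same trajectory.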
Highlights
It is widely accepted that urban road networks (URNs) underpin the prosperity of our society and economy, while URNs are exposed to various internal or external disruptions [1, 2]
The aim of this study is to propose a novel adaptive signal control (ASC) strategy based on a day-to-day (DTD) dynamic model and deep reinforcement learning (DRL) to improve the resilience of URNs suffering from different levels of disruption; the strategy captures drivers' day-to-day learning behaviours and the complex nature of traffic flow evolution and signal setting at intersections
In order to improve the resilience of URNs experiencing different levels of disruption, this study proposes a novel adaptive signal control (ASC) strategy based on a doubly dynamic learning framework, which combines the DTD traffic dynamic model with deep reinforcement learning (DRL). This novel signal control takes into account the drivers' day-to-day learning process on route perceptions and the ASC's learning mechanism on flow distributions
Summary
It is widely accepted that urban road networks (URNs) underpin the prosperity of our society and economy, while URNs are exposed to various internal or external disruptions [1, 2]. This study proposes a novel adaptive signal control method based on a doubly dynamic learning framework to improve the resilience of URNs suffering from disruptions; this learning framework consists of a day-to-day (DTD) traffic dynamic model and deep reinforcement learning. The resilience of URNs is observed from the perspective of day-to-day traffic evolution and quantified with the RAI index; various signal controls and distinct learning processes are incorporated into the model to demonstrate how adaptive signal controls (ASCs) improve resilience by adjusting the red/green time split under disruptions. The aim of this study is to propose a novel ASC strategy based on the DTD dynamic model and deep reinforcement learning (DRL) to improve the resilience of URNs suffering from different levels of disruption, capturing drivers' day-to-day learning behaviours and the complex nature of traffic flow evolution and signal setting at intersections. Two existing adaptive signal controls and a relative area index (RAI) used for quantifying the resilience of URNs are introduced briefly.
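The summary quantifies resilience with a relative area index (RAI). The paper's exact definition is not given here, so the following is a hedged sketch under a common assumption: the RAI is taken as the area under the actual day-to-day performance curve divided by the area under the undisrupted baseline over the same horizon, so a value closer to 1 indicates a smaller and shorter performance loss. The function name and the sample recovery curve are illustrative.

```python
import numpy as np

def curve_area(y, x):
    # Trapezoidal area under a sampled performance curve.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def relative_area_index(performance, baseline, days):
    # Assumed RAI: actual performance area relative to the undisrupted
    # baseline area over the observation horizon.
    return curve_area(performance, days) / curve_area(baseline, days)

days = np.arange(0, 11, dtype=float)
baseline = np.full(11, 100.0)
# Hypothetical network performance: a disruption hits on day 2, then the
# network gradually recovers under the signal control.
performance = np.array([100, 100, 60, 65, 72, 80, 87, 92, 96, 99, 100.0])

rai = relative_area_index(performance, baseline, days)
print(round(rai, 3))  # 0.851 for this sample curve
```

A deeper or longer-lasting performance drop shrinks the numerator's area, so the RAI falls; this is what lets the index compare signal controls across disruption severities.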