Abstract
Deep reinforcement learning methods have shown promising results in the development of adaptive traffic signal controllers. Accidents, weather conditions, and special events can all abruptly alter real-world traffic flow. To prevent congestion, the traffic light must take immediate and appropriate action based on a sound understanding of its environment. In this paper, we develop a reliable controller for such a highly dynamic environment and investigate the resilience of these controllers to a variety of environmental disruptions, such as accidents. In the proposed method, the agent is given a complete view of the environment by discretizing the intersection and modifying the state space accordingly. The algorithm is independent of the location and time of accidents: if the accident location changes, the agent does not need to be retrained. The agent is trained using deep Q-learning with experience replay, and the model is evaluated in the traffic microsimulator SUMO. Simulation results demonstrate that the proposed method effectively shortens queues under disruption.
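The abstract's training setup, Q-learning over a discretized state space with experience replay, can be sketched minimally as below. This is an illustrative assumption, not the paper's implementation: the replay buffer, the tabular Q-function over discretized intersection states, the four-phase action space, and all hyperparameters are hypothetical stand-ins (the paper uses a deep Q-network rather than a table).

```python
import random
from collections import deque, defaultdict

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state) transitions."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniformly sample a minibatch (capped at the current buffer size).
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

N_ACTIONS = 4                  # e.g. four signal phases (assumption)
GAMMA, ALPHA = 0.95, 0.1       # discount factor and learning rate (assumptions)

# Tabular Q-values keyed by discretized intersection state (stand-in for a DQN).
Q = defaultdict(lambda: [0.0] * N_ACTIONS)

def q_update(batch):
    """One Q-learning step over a sampled minibatch of transitions."""
    for s, a, r, s_next in batch:
        target = r + GAMMA * max(Q[s_next])
        Q[s][a] += ALPHA * (target - Q[s][a])

# Usage: store one transition (negative reward = queued vehicles) and learn from it.
buf = ReplayBuffer()
buf.push(("cell_occupancy", 0), 1, -3.0, ("cell_occupancy", 1))
q_update(buf.sample(32))
```

Replaying stored transitions in random minibatches, rather than learning only from the most recent step, is what lets the controller keep improving from rare events such as accidents long after they occur.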