Abstract

Conventional simulations of multi‐exit indoor evacuation focus primarily on determining a reasonable exit for each pedestrian based on numerous factors in a changing environment. The results commonly leave some exits congested and others under‐utilized, especially with large numbers of pedestrians. We propose a multi‐exit evacuation simulation based on deep reinforcement learning (DRL), referred to as MultiExit‐DRL, which uses a deep neural network (DNN) framework to facilitate state‐to‐action mapping. The DNN framework applies Rainbow Deep Q‐Network (DQN), a DRL algorithm that integrates several advanced DQN methods, to improve data utilization and algorithm stability, and further divides the action space into eight isometric directions for possible pedestrian choices. We compare MultiExit‐DRL with two conventional multi‐exit evacuation simulation models in three separate scenarios: varying pedestrian distribution ratios, varying exit width ratios, and varying open schedules for an exit. The results show that MultiExit‐DRL exhibits high learning efficiency while reducing the total number of evacuation frames in all designed experiments. In addition, the integration of DRL allows pedestrians to explore other potential exits and helps determine optimal directions, leading to high exit utilization efficiency.
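The abstract's discretization of movement into eight isometric directions can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the angle convention (action *k* maps to *k* × 45°), and the epsilon‐greedy selection are assumptions made for the example.

```python
import math
import random

# Hypothetical sketch: the action space is divided into eight isometric
# directions, so action index k (0..7) maps to a unit vector at k * 45 degrees.
N_ACTIONS = 8

def action_to_direction(action: int) -> tuple:
    """Map a discrete action (0..7) to a unit step direction (dx, dy)."""
    angle = action * (2 * math.pi / N_ACTIONS)
    return (math.cos(angle), math.sin(angle))

def epsilon_greedy(q_values, epsilon: float, rng=random) -> int:
    """Select one of the eight directions from Q-values, with exploration.

    With probability epsilon a random direction is chosen (exploration);
    otherwise the direction with the highest Q-value (exploitation).
    """
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In a full Rainbow DQN agent, `q_values` would come from the DNN's forward pass on the pedestrian's state; here it is left as a plain list so the direction mapping stands alone.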
