Abstract

Deep reinforcement learning (DRL) has proven more suitable than traditional reinforcement learning for path planning in large-scale scenarios. To complete DRL-based collaborative path planning for crowd evacuation more effectively, it is necessary to address the space expansion caused by the growing number of agents. In addition, crowd evacuation often involves complicated circumstances such as exit selection and congestion. However, few existing works have integrated these two aspects jointly. To solve this problem, we propose a planning approach for crowd evacuation based on an improved DRL algorithm, which improves evacuation efficiency for large-scale crowd path planning. First, we propose a framework of congestion detection-based multi-agent reinforcement learning, which divides the crowd into leaders and followers, simulates the leaders with a multi-agent system, and sets up congestion detection areas to evaluate the degree of congestion at each exit. Next, within this framework, we propose the Improved Multi-Agent Deep Deterministic Policy Gradient (IMADDPG) algorithm, which adds a mean field network to account for the returns of the other agents, enabling all agents to maximize the performance of the collaborative planning task during training. Then, we implement a hierarchical path planning method whose upper layer solves the global path with the IMADDPG algorithm and whose lower layer uses the reciprocal velocity obstacles (RVO) method to avoid collisions within the crowd. Finally, we simulate the proposed method in a crowd simulation system. The experimental results show the effectiveness of our method.
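For intuition, the following is a minimal, hedged sketch (not the authors' code; the class name, layer sizes, and dimensions are illustrative assumptions) of the mean-field idea behind IMADDPG described above: each agent's centralized critic receives its own observation and action plus the mean of the other agents' actions, so the joint-action input no longer grows with the number of agents.

```python
import torch
import torch.nn as nn

class MeanFieldCritic(nn.Module):
    """Illustrative critic: Q(own obs, own action, mean of other agents' actions)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        # Input: own observation, own action, and the mean action of the other agents.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar Q-value
        )

    def forward(self, obs, own_act, other_acts):
        # other_acts: (batch, n_other_agents, act_dim) -> averaged over the other agents
        mean_act = other_acts.mean(dim=1)
        return self.net(torch.cat([obs, own_act, mean_act], dim=-1))

# Usage: Q-values for a batch of 8 samples with 5 other agents.
critic = MeanFieldCritic(obs_dim=10, act_dim=2)
q = critic(torch.randn(8, 10), torch.randn(8, 2), torch.randn(8, 5, 2))
print(q.shape)  # torch.Size([8, 1])
```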

Highlights

  • Planning evacuation paths that reduce evacuation time in densely populated areas, especially complex environments with obstacles and multiple exits, is one of the important issues in evacuation simulation for emergency disasters

  • We propose a framework of congestion detection-based multi-agent reinforcement learning (MARL) to prepare for our path planning method

  • We propose a hierarchical path planning method based on an improved deep reinforcement learning (DRL) algorithm to collaboratively search for the optimal evacuation path; a minimal structural sketch of the two layers follows this list
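The sketch below (assumptions only, not the authors' implementation) shows the two-layer structure referenced in the highlight above: the upper layer picks a goal exit for each leader (a stand-in for the IMADDPG policy) and the lower layer computes a collision-free step (a crude stand-in for reciprocal velocity obstacles, RVO). The trade-off weight, radii, and speeds are illustrative.

```python
import numpy as np

def upper_layer_select_exit(pos, exits, congestion):
    """Stand-in for the IMADDPG policy: pick an exit trading off distance vs. congestion."""
    dists = np.linalg.norm(exits - pos, axis=1)
    return exits[np.argmin(dists + 2.0 * congestion)]  # weight 2.0 is an assumed trade-off

def lower_layer_step(pos, goal, others, dt=0.1, radius=0.4, speed=1.2):
    """Stand-in for RVO: move toward the goal while steering away from nearby agents."""
    v = goal - pos
    v = speed * v / (np.linalg.norm(v) + 1e-8)
    for o in others:  # simple repulsion instead of true RVO velocity cones
        d = pos - o
        if np.linalg.norm(d) < 2 * radius:
            v += d / (np.linalg.norm(d) + 1e-8)
    return pos + dt * v

# One planning step for a single leader with two exits and one nearby pedestrian.
exits = np.array([[0.0, 10.0], [10.0, 10.0]])
pos = np.array([5.0, 2.0])
goal = upper_layer_select_exit(pos, exits, congestion=np.array([0.8, 0.1]))
pos = lower_layer_step(pos, goal, others=[np.array([5.2, 2.1])])
print(goal, pos)
```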


Summary

INTRODUCTION

Planning an evacuation path to reduce the evacuation time in densely populated areas, especially complex environments with obstacles and multiple exits, is one of the important issues of evacuation simulation for emergency disasters. (3) A hierarchical path planning method for crowd evacuation is proposed to reduce evacuation time; it couples the IMADDPG and RVO algorithms under the framework of congestion detection-based MARL. Cruz and Yu [35] proposed a method that combines kernel smoothing with the WoLF-Policy Hill Climbing DRL algorithm to address the difficulty traditional reinforcement learning has with path planning in an unfamiliar environment. In [37], Sui et al. proposed a method similar to ours in which they set a reward function and used a DRL algorithm to implement path planning for pedestrians; their approach differs from ours in the following main aspects. OpenAI Gym provides a unified environment interface and a platform for building new environments, and it encapsulates commonly used functions.
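As an illustration of the unified Gym interface mentioned above, the skeleton below (a hedged sketch, not from the paper; the observation layout, reward shaping, exit list, and class name are assumptions) shows how an evacuation scenario could be wrapped as a custom environment using the classic Gym API.

```python
import gym
import numpy as np
from gym import spaces

class EvacuationEnv(gym.Env):
    """Illustrative single-leader environment: move on a 2-D plane toward one of several exits."""

    def __init__(self, exits=((0.0, 10.0), (10.0, 10.0))):
        super().__init__()
        self.exits = np.array(exits, dtype=np.float32)
        # Observation: agent position (x, y); action: velocity command (vx, vy).
        self.observation_space = spaces.Box(low=0.0, high=10.0, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.pos = None

    def reset(self):
        self.pos = np.random.uniform(2.0, 8.0, size=2).astype(np.float32)
        return self.pos.copy()

    def step(self, action):
        self.pos = np.clip(self.pos + 0.1 * np.asarray(action, dtype=np.float32), 0.0, 10.0)
        dist = np.linalg.norm(self.exits - self.pos, axis=1).min()
        done = bool(dist < 0.5)                             # reached the nearest exit
        reward = -float(dist) + (10.0 if done else 0.0)     # shaped toward the exits
        return self.pos.copy(), reward, done, {}            # classic Gym 4-tuple
```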

FRAMEWORK OF MARL BASED ON CONGESTION DETECTION
A PATH PLANNING APPROACH BASED ON DRL
THE UPPER-LAYER GLOBAL PATH PLANNING BASED ON IMADDPG
EXPERIMENT AND ANALYSIS
Findings
CONCLUSION