Abstract

The internal structure of buildings is becoming increasingly complex. Providing a scientific and reasonable evacuation route for trapped persons in a complex indoor environment is important for reducing casualties and property losses. In emergency and disaster-relief settings, indoor path planning involves great uncertainty and stricter safety requirements. Q-learning is a value-based reinforcement learning algorithm that can complete path planning tasks through autonomous learning, without requiring mathematical models or environmental maps. We therefore propose an indoor emergency path planning method based on a Q-learning optimization algorithm. First, a grid environment model is established. A discount rate on the exploration factor is then used to optimize the Q-learning algorithm: the exploration factor in the ε-greedy strategy is dynamically adjusted before random actions are selected, which accelerates convergence in a large-scale grid environment. Indoor emergency path planning experiments based on the Q-learning optimization algorithm were carried out with both simulated data and real indoor environment data. The proposed Q-learning optimization algorithm essentially converges after about 500 learning rounds, roughly 2000 rounds fewer than the classic Q-learning algorithm requires, while the SARSA algorithm shows no clear convergence trend within 5000 rounds. The results show that, when planning the shortest path in a grid environment, the proposed Q-learning optimization algorithm outperforms both the SARSA algorithm and the classic Q-learning algorithm in solving time and convergence speed; it converges approximately five times faster than the classic Q-learning algorithm. In the grid environment, the proposed algorithm successfully plans, in a short time, the shortest path that avoids obstacle areas.
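To make the exploration-factor adjustment concrete, the following is a minimal Python sketch of tabular Q-learning with an ε-greedy policy whose exploration factor is multiplied by a per-episode discount rate. The environment interface (`reset`, `step`, `n_states`, `n_actions`) and all parameter names (`epsilon_decay`, `epsilon_min`, etc.) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def epsilon_greedy(Q, state, epsilon, n_actions, rng):
    """Choose a random action with probability epsilon, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def run_episode(env, Q, alpha, gamma, epsilon, rng):
    """One episode of tabular Q-learning; Q is updated in place."""
    state = env.reset()
    done = False
    while not done:
        action = epsilon_greedy(Q, state, epsilon, env.n_actions, rng)
        next_state, reward, done = env.step(action)
        # Standard Q-learning target: reward plus discounted best next-state value.
        target = reward + gamma * np.max(Q[next_state])
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

def train(env, n_episodes=500, alpha=0.1, gamma=0.9,
          epsilon=1.0, epsilon_decay=0.99, epsilon_min=0.01, seed=0):
    """Train while shrinking the exploration factor by a per-episode discount rate."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.n_states, env.n_actions))
    for _ in range(n_episodes):
        run_episode(env, Q, alpha, gamma, epsilon, rng)
        epsilon = max(epsilon_min, epsilon * epsilon_decay)  # dynamic adjustment of ε
    return Q
```

Decaying ε in this way keeps early episodes exploratory and later episodes mostly greedy, which is the intuition behind the faster convergence reported above.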

Highlights

  • In recent years, with the advancement of urbanization, the internal structure of urban buildings has become more complex and variable

  • The results show that the Q-learning optimization algorithm outperforms both the SARSA algorithm and the classic Q-learning algorithm in solving time and convergence speed when planning the shortest path in a grid environment

  • The rest of the paper is organized as follows: Section 1 introduces indoor emergency path planning based on the proposed Q-learning optimization algorithm in a grid environment

Summary

Introduction

With the advancement of urbanization, the internal structure of urban buildings has become more complex and variable. Building on Q-learning combined with the ε-greedy strategy, Li et al. [24] proposed a dynamic parameter adjustment strategy and a trial-and-error action deletion mechanism, which balance exploration and exploitation during learning and improve the agent's exploration efficiency. This paper proposes a path planning algorithm based on a grid environment and optimizes the Q-learning algorithm by introducing a discount rate for the exploration factor. To address the slow convergence and low accuracy of the Q-learning algorithm in a large-scale grid environment, the exploration factor in the ε-greedy strategy is dynamically adjusted and a discount rate variable for the exploration factor is introduced. The rest of the paper is organized as follows: Section 1 introduces indoor emergency path planning based on the proposed Q-learning optimization algorithm in a grid environment.
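The grid environment model mentioned above can be sketched as a simple occupancy grid with an illustrative reward function. The class below is an assumed interface: the class name, reward values, and the four-connected move set are not taken from the paper, and it is written to be compatible with the Q-learning sketch shown after the abstract.

```python
import numpy as np

class GridWorld:
    """Assumed grid environment: 0 = free cell, 1 = obstacle, one exit cell.

    The rewards are illustrative, not the paper's: +100 for reaching the exit,
    -100 for entering an obstacle (episode ends), -1 per step to favour short paths.
    """
    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, grid, start, goal):
        self.grid = np.asarray(grid)
        self.rows, self.cols = self.grid.shape
        self.start, self.goal = start, goal
        self.n_states = self.rows * self.cols
        self.n_actions = len(self.MOVES)

    def _index(self, cell):
        # Flatten a (row, col) cell into a single state index for the Q-table.
        return cell[0] * self.cols + cell[1]

    def reset(self):
        self.pos = self.start
        return self._index(self.pos)

    def step(self, action):
        dr, dc = self.MOVES[action]
        r = min(max(self.pos[0] + dr, 0), self.rows - 1)  # clamp to the grid
        c = min(max(self.pos[1] + dc, 0), self.cols - 1)
        self.pos = (r, c)
        if self.pos == self.goal:
            return self._index(self.pos), 100.0, True
        if self.grid[r, c] == 1:
            return self._index(self.pos), -100.0, True
        return self._index(self.pos), -1.0, False

# Example (illustrative): a 5 x 5 grid with a short obstacle wall.
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
env = GridWorld(grid, start=(0, 0), goal=(4, 4))
```

Passing such an environment to the training sketch above would yield a Q-table from which a planned path can be read off by repeatedly following the greedy action from the start cell.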

Indoor Emergency Path Planning Method
Method
Q-Learning Optimization Algorithm
Path Planning Strategy
Action and state of agent
Set the reward function
Action strategy selection
Q Value Table
Dynamic Adjustment of Exploration Factors
Algorithm Flow
Algorithm Simulation Experimental Analysis
Environmental Spatial Modeling
Comparison and Analysis of Experimental Results
Simulation Scene Experiment Analysis
Experimental Data and Scene Construction
Conclusions
