Abstract

The integration of computer vision, machine learning algorithms, and crowd evacuation simulation has become a new research focus. At present, many studies in this area extract data only from a specific scene to train a neural network; lacking data and corresponding inputs from multiple scenes, the trained model can be applied only to a single scene. In addition, training a neural network on data obtained from normal human movement cannot accurately reflect the movement characteristics of real emergency evacuations. To address these problems, this paper first collects and organizes a set of real earthquake evacuation videos covering multiple scenes. Second, it proposes a coordinate transformation method suitable for non-calibrated video images. This method, together with a multiple object tracking algorithm combining YOLOv4 and Deep-SORT and 17 parameters defined in this study to characterize evacuation movement, constitutes a method for automatically extracting evacuation movement characteristic parameters in batches from non-calibrated earthquake evacuation videos. Finally, an evacuation velocity vector classification prediction model based on a deep convolutional neural network (CNN-VCPM) is established, which studies the influence of environmental factors in evacuation scenes on velocity decision-making at the microscopic level. Compared with a back-propagation neural network, the CNN-VCPM prediction model fits crowd movement behavior more accurately.
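The abstract does not detail the proposed coordinate transformation for non-calibrated video. A common way to map pixel positions to ground-plane coordinates without full camera calibration is a planar homography estimated from a few reference points whose real-world positions are known (e.g. floor markings). The sketch below is an illustration of that general idea under this assumption, not the paper's actual method; the function names and the four-point correspondence setup are hypothetical.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 planar homography H from >= 4 point pairs via the
    direct linear transform (DLT). src: pixel coords, dst: ground-plane
    coords, both shaped (N, 2). Returns H with H[2, 2] normalized to 1."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography vector is the right singular vector with the
    # smallest singular value (the null space of the constraint matrix).
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_ground(H, pts):
    """Apply homography H to (N, 2) pixel points; returns (N, 2) ground
    coordinates after the perspective divide."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With such a mapping, per-frame pedestrian positions from the tracker can be converted to metric ground coordinates, from which velocities and the other movement characteristic parameters can be computed by finite differences between frames.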
