Abstract

Recent studies in pedestrian simulation have been able to construct highly realistic navigation behaviour in many circumstances. However, when replicating close interactions between pedestrians, the replicated behaviour is often unnatural and lacks human likeness. One possible reason is that current models often ignore the cognitive factors in the human thinking process. Another reason is that many models approach the problem by optimising certain objectives, whereas in real life humans do not always take the most optimised decisions, particularly when interacting with other people. To improve the navigation behaviour in this circumstance, we propose a pedestrian interaction model using reinforcement learning. Additionally, a novel cognitive prediction model, inspired by the predictive system of human cognition, is incorporated. This helps the pedestrian agent in our model learn to interact and predict movement in a manner similar to humans. In our experimental results, when compared to other models, the path taken by our model's agent is not the most optimised in certain aspects such as path length, time taken, and collisions. However, our model demonstrates a more natural and human-like navigation behaviour, particularly in complex interaction settings.

Highlights

  • Utilising an optimisation method on a navigation matrix with maximum neighbourhood value to simulate the agent's navigation towards a possibly open door

  • The pedestrian agent navigates by partitioning the navigation path into multiple sections passing certain points, following the sub-goal concept

  • Reinforcement learning could be employed for the pedestrian agent to perform basic obstacle-avoidance actions

  • The pedestrian agent learns navigation and obstacle-avoidance behaviour using a deep reinforcement learning framework

  • A deep reinforcement learning model for pedestrian agents in interacting circumstances such as passing and overtaking

  • The risk from an obstacle can significantly affect the path planned by the pedestrian before navigation

  • A prediction model in the path-planning process of the pedestrian agent


Introduction

Constructing a human-like pedestrian navigation model is a problem that requires much attention from many research fields. Studies in the robotics domain, for example, need to address this problem to build robots capable of manoeuvring in real-world environments [1,2,3]. Another example is studies in urban planning, in which pedestrian navigation behaviour needs to be constructed to analyse the possible activities of the people moving in the area [4,5]. Other approaches replicate pedestrian behaviour using rule-based models [8] or, more recently, neural networks [9]. They usually aim at optimising certain objectives, such as the shortest path or the minimum number of collisions. In a reinforcement learning formulation, the agent needs to optimise its policy, which specifies the action that will be taken in each state of the observed environment. The value function for a state s, i.e. the expected discounted return obtained by following policy π from s, would be presented as:

V^{\pi}(s) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \,\middle|\, s_{0} = s\right]

where γ ∈ [0, 1) is the discount factor and r_t is the reward received at step t.
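The policy and value function described above can be made concrete with a minimal tabular Q-learning sketch: an agent on a small grid learns a policy mapping states to actions by iteratively improving its action-value estimates. The grid size, rewards, and hyperparameters below are illustrative assumptions, not the paper's actual model.

```python
import random

random.seed(0)

SIZE = 5                                      # 5x5 grid world
GOAL = (4, 4)                                 # cell the "pedestrian" walks to
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1         # learning rate, discount, exploration

# Q[state][action index] -> estimated discounted return
Q = {(x, y): [0.0] * len(ACTIONS) for x in range(SIZE) for y in range(SIZE)}

def step(state, a):
    """Apply action a; the grid boundary keeps the agent inside."""
    x = min(max(state[0] + ACTIONS[a][0], 0), SIZE - 1)
    y = min(max(state[1] + ACTIONS[a][1], 0), SIZE - 1)
    nxt = (x, y)
    reward = 10.0 if nxt == GOAL else -1.0    # -1 per step favours short paths
    return nxt, reward

for episode in range(500):
    s = (0, 0)
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        nxt, r = step(s, a)
        # Q-learning update toward r + gamma * max_a' Q(s', a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

# Greedy rollout of the learned policy from the start cell
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 30:
    a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
    s, _ = step(s, a)
    path.append(s)

print(path[-1], len(path) - 1)
```

The greedy rollout illustrates why purely optimising agents differ from humans: the learned path is the shortest one, with no allowance for the suboptimal, socially driven detours the paper aims to reproduce.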
