Abstract

We propose a virtual crowd navigation approach based on deep reinforcement learning to improve the adaptability of virtual crowds in unknown, complex environments. To address convergence to local optima, slow iteration, or even failure to converge caused by sparse rewards in complex environments, we integrate a curiosity-driven mechanism, key navigation point acquisition, and a failure path penalty method, and combine long short-term memory networks and dynamic obstacle collision prediction with the proximal policy optimization algorithm, thereby realizing crowd navigation in complex environments. Experimental results show that the proposed approach can simulate the motion of virtual crowds in various dynamic and complex scenarios without environment modeling. The use of a continuous action space also makes the movement trajectories of the virtual crowds more realistic and natural. Furthermore, our approach can provide analysis and demonstration tools for autonomous interactions, such as competition and cooperation of group intelligence, in open and dynamic environments.
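As a rough illustration of how the reward-shaping ideas named in the abstract (curiosity-driven exploration, key navigation points, and the failure path penalty) might be combined on top of a sparse extrinsic reward, the sketch below shows one possible composite reward. The function name, term weights, and distance thresholds are assumptions for exposition only; the paper's actual formulation may differ.

```python
# Illustrative sketch only: term names, weights, and thresholds are assumed,
# not taken from the paper.
import numpy as np

def shaped_reward(extrinsic_r, pred_next_feat, true_next_feat,
                  agent_pos, key_points, failed_points,
                  beta=0.2, waypoint_bonus=0.5, fail_penalty=0.3, radius=1.0):
    """Combine a sparse extrinsic reward with three shaping terms:
    curiosity (forward-model prediction error), a bonus for reaching a key
    navigation point, and a penalty for revisiting previously failed paths."""
    # Curiosity term: prediction error of a learned forward model,
    # in the spirit of intrinsic-curiosity approaches.
    curiosity = beta * float(np.mean((pred_next_feat - true_next_feat) ** 2))

    # Key-navigation-point bonus: reward proximity to any key point.
    near_waypoint = any(np.linalg.norm(agent_pos - p) < radius for p in key_points)
    bonus = waypoint_bonus if near_waypoint else 0.0

    # Failure-path penalty: discourage positions lying on earlier failed trajectories.
    on_failed = any(np.linalg.norm(agent_pos - p) < radius for p in failed_points)
    penalty = fail_penalty if on_failed else 0.0

    return extrinsic_r + curiosity + bonus - penalty
```

Such a shaped reward would then be fed to the policy-gradient update (PPO in the paper), with the recurrent (LSTM) policy and collision prediction handled separately in the network architecture.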
