Abstract

In this paper, we present a method for implementing a navigation system that lets an intelligent agent in a virtual world generate collision-free motion. To observe the world, the agent uses virtual sensors based on the depth-buffer information of a rendered image of the scene. This information drives low-level collision avoidance, obstacle avoidance while moving toward intermediate goals, and the construction of an accessibility graph and an obstacle map of the environment. The agent requires no access to the internal representation of the virtual world, much as a mobile robot perceives the real world only through its sensors. Furthermore, the algorithm is fast enough to run in real time.
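To illustrate the kind of virtual sensing the abstract describes, the sketch below shows one plausible way to turn a row of normalized depth-buffer values into eye-space distances and a simple steering decision. All names, parameters, and the linearization step (standard OpenGL perspective depth) are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of depth-buffer-based obstacle sensing.
# Function names and parameters are illustrative assumptions,
# not the paper's implementation.

def linearize(d, near, far):
    """Convert a normalized depth-buffer value d in [0, 1] (standard
    OpenGL perspective depth) to an eye-space distance."""
    z_ndc = 2.0 * d - 1.0
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

def steer(depth_row, near=0.1, far=100.0, safe=2.0):
    """Pick a steering direction from one depth scanline: the column
    with the greatest free distance. Returns (column, blocked), where
    blocked is True if every column is closer than the safety range."""
    dists = [linearize(d, near, far) for d in depth_row]
    best = max(range(len(dists)), key=lambda i: dists[i])
    return best, max(dists) < safe

# Toy 8-pixel scanline: obstacles close on the left, open space right.
row = [0.2, 0.2, 0.3, 0.5, 0.9, 0.99, 0.99, 0.95]
col, blocked = steer(row)  # steers toward column 5, not blocked
```

A full system would apply this over a 2D depth image and feed the resulting free-space estimates into the higher-level accessibility graph and obstacle map.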
