Abstract

Aside from hand-coded bots, most videogame agents rely on experience in one way or another. Some agents improve over time by adjusting to real experience, while others make projections by leveraging simulated experience. In one sense, human play seems to resemble simulation, in that human players often try to visualize possible trajectories before deciding on a real course of action. However, most existing simulation-based agents require precise knowledge of the game's code, and their search depth is limited by the granular time scales encountered in videogames. Human players, on the other hand, appear to visualize plans in terms of abstract “skills,” such as running and jumping. These skills are learned from real experience, so, in this sense, human play resembles learning-based approaches. Motivated by these observations, we propose an approach that bridges the gap between skills, which are uncertain in outcome and duration, and traditional simulation-based planning, which is discrete. We apply this approach to maze-like navigation problems in Infinite Mario. After an initial skill acquisition phase, our agent is capable of navigating new levels without further training, and it also scales better with goal distance than a granular simulation method that exploits exact knowledge of the game's physics.
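The abstract does not detail the planning mechanism, but the following minimal sketch illustrates one way planning over learned skills with uncertain outcomes and durations could look, in contrast to frame-by-frame simulation of exact game physics. The skill outcome models, the coarse (x, y) state abstraction, the greedy rollout policy, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: choosing among learned "skills" (macro-actions with stochastic
# outcomes and durations) by sampling their outcome models, rather than
# simulating the game frame by frame. All models here are made up.

import random

# Each skill maps to a list of (probability, dx, dy, duration) outcomes,
# as might be estimated from real play experience.
SKILL_MODELS = {
    "run_right":  [(0.8,  2,  0, 10), (0.2,  1, 0, 12)],
    "jump_right": [(0.7,  1, -1, 15), (0.3,  0, 0, 15)],
    "run_left":   [(0.8, -2,  0, 10), (0.2, -1, 0, 12)],
}

def sample_outcome(skill):
    """Sample one (dx, dy, duration) outcome from a skill's learned model."""
    r, cum = random.random(), 0.0
    for p, dx, dy, dur in SKILL_MODELS[skill]:
        cum += p
        if r <= cum:
            return dx, dy, dur
    return SKILL_MODELS[skill][-1][1:]

def rollout_cost(state, goal, depth=5, samples=20):
    """Estimate expected time to reach the goal column by sampling short
    skill sequences chosen greedily toward the goal (a stand-in for search)."""
    total = 0.0
    for _ in range(samples):
        x, y = state
        t = 0
        for _ in range(depth):
            skill = "run_right" if x < goal[0] else "run_left"
            dx, dy, dur = sample_outcome(skill)
            x, y, t = x + dx, y + dy, t + dur
            if x >= goal[0]:
                break
        total += t + abs(goal[0] - x) * 10  # penalize remaining distance
    return total / samples

def choose_skill(state, goal):
    """Pick the skill whose sampled successor minimizes estimated cost-to-go."""
    best, best_cost = None, float("inf")
    for skill in SKILL_MODELS:
        dx, dy, dur = sample_outcome(skill)
        cost = dur + rollout_cost((state[0] + dx, state[1] + dy), goal)
        if cost < best_cost:
            best, best_cost = skill, cost
    return best

if __name__ == "__main__":
    print(choose_skill((0, 0), goal=(12, 0)))
```

Because each skill spans many game frames, a planner at this level of abstraction can look much farther ahead for the same search budget than one that simulates individual frames, which is the scaling advantage the abstract claims.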
