Abstract

Physics-based games have vast state spaces. Many strategies can be employed in competitive games, such as playing passively and waiting for the opponent to make a mistake, playing aggressively to force mistakes from the opponent, or even using environmental objects to an agent's advantage. This vastness of possibilities makes it difficult for a programmer to account for every situation and build an intelligent, believable, rule-based (hard-coded) AI agent. This project takes advantage of reinforcement learning to create agents that can adapt to dynamically changing physics-based environments, such as competitive vehicular soccer games. Using reward functions, it aims to produce believable agents that exhibit intrinsic behaviors such as defending their own goal and attacking the ball. Through trial and error, the reward function is iteratively modified to shape behavioral patterns that progressively improve in performance. The performance tests show that a reward function that considers a richer set of state-space parameters produces better-performing agents than those trained with a less defined reward function and state space. Moreover, the final agent trained through the experiments was found to be believable and hard to distinguish from a human player.
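
To illustrate the kind of reward shaping the abstract describes, the sketch below combines sparse rewards for scoring or conceding with dense distance-based terms that encourage attacking the ball and protecting the agent's own goal. All state fields, weights, and coordinates are illustrative assumptions for a generic vehicular-soccer setting, not values taken from the paper.

```python
# Minimal sketch of a shaped reward for a vehicular-soccer agent.
# Weights, field layout, and state fields are hypothetical, not from the paper.

from dataclasses import dataclass
import math


@dataclass
class SoccerState:
    agent_pos: tuple          # (x, y) position of the agent's car
    ball_pos: tuple           # (x, y) position of the ball
    own_goal_pos: tuple       # (x, y) center of the agent's goal
    opponent_goal_pos: tuple  # (x, y) center of the opponent's goal
    scored: bool = False      # agent's team scored on this step
    conceded: bool = False    # agent's team conceded on this step


def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def shaped_reward(state: SoccerState) -> float:
    """Combine sparse goal events with dense distance-based shaping terms."""
    reward = 0.0

    # Sparse rewards for the terminal goal events (hypothetical magnitudes).
    if state.scored:
        reward += 10.0
    if state.conceded:
        reward -= 10.0

    # Dense shaping: encourage attacking the ball ...
    reward -= 0.01 * _dist(state.agent_pos, state.ball_pos)
    # ... and pushing the ball toward the opponent's goal,
    reward -= 0.01 * _dist(state.ball_pos, state.opponent_goal_pos)
    # while penalizing letting the ball approach the agent's own goal.
    reward += 0.005 * _dist(state.ball_pos, state.own_goal_pos)

    return reward


# Example step evaluation with made-up coordinates.
s = SoccerState(agent_pos=(0.0, 0.0), ball_pos=(5.0, 2.0),
                own_goal_pos=(-50.0, 0.0), opponent_goal_pos=(50.0, 0.0))
print(shaped_reward(s))
```

In practice, such a function would be evaluated each simulation step and its weights tuned iteratively, mirroring the trial-and-error refinement of the reward function described above.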
