Abstract

Robot soccer is one of the major domains for studying the coordination of multi-robot teams. The Decentralized Partially Observable Markov Decision Process (Dec-POMDP) is a recent mathematical framework that has been used to model multi-agent coordination. In this work, we model a simple robot soccer game as a Dec-POMDP and solve it using an algorithm based on the approach detailed in [1]. This algorithm represents policies as finite state controllers and searches the policy space with genetic algorithms. We use the TeamBots simulation environment. We use the score difference of a game as the fitness measure and estimate it by running many simulations. We show that it is possible to model a robot soccer game as a Dec-POMDP and achieve satisfactory results: the trained policy wins almost all of its games against the standard TeamBots teams and against a reinforcement-learning-based team developed elsewhere.
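The approach described above can be sketched as a genetic algorithm over finite state controllers, with fitness estimated by averaging the score difference over repeated simulated games. This is a minimal illustrative sketch, not the paper's implementation: the FSC dimensions, GA parameters, and the toy `simulate_game` stand-in (which replaces an actual TeamBots match) are all assumptions.

```python
import random

# Assumed small FSC and GA dimensions -- illustrative only.
N_STATES, N_OBS, N_ACTIONS = 4, 3, 5
POP_SIZE, GENERATIONS, N_GAMES = 20, 30, 5

def random_fsc():
    # An FSC maps (controller state, observation) -> (action, next state).
    return {(s, o): (random.randrange(N_ACTIONS), random.randrange(N_STATES))
            for s in range(N_STATES) for o in range(N_OBS)}

def simulate_game(fsc):
    # Placeholder for one simulated match returning a score difference;
    # in the paper this would be a TeamBots game, here a toy stand-in.
    return sum(a for (a, _) in fsc.values()) / len(fsc) - N_ACTIONS / 2

def fitness(fsc):
    # Estimate the expected score difference by averaging many simulations.
    return sum(simulate_game(fsc) for _ in range(N_GAMES)) / N_GAMES

def mutate(fsc, rate=0.1):
    # Randomly rewrite some (action, next state) entries of the controller.
    child = dict(fsc)
    for key in child:
        if random.random() < rate:
            child[key] = (random.randrange(N_ACTIONS),
                          random.randrange(N_STATES))
    return child

def evolve():
    # Simple elitist GA: keep the fitter half, refill with mutated copies.
    population = [random_fsc() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elite = population[: POP_SIZE // 2]
        population = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(population, key=fitness)

best = evolve()
```

A real implementation would replace `simulate_game` with full matches in the simulator and likely add crossover; the skeleton above only shows how FSC policies, mutation, and simulation-based fitness fit together.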
