Abstract

Recent developments in multiagent reinforcement learning mostly concentrate on normal form games or restrictive hierarchical form games. In this paper, we apply the well-known Q-learning algorithm to extensive form games in which agents have a fixed priority in action selection. We also introduce a new concept called associative Q-values, which can be used not only in action selection, leading to a subgame perfect equilibrium, but also in the update rule, which we prove to be convergent. The associative Q-value is the expected utility of an agent in a game situation and serves as an estimate of the value of the subgame perfect equilibrium point.
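
The abstract does not spell out the algorithm, but the core idea can be sketched roughly as follows. The snippet below is a minimal illustration, not the paper's implementation: it assumes a turn-based extensive form game whose states expose a hypothetical interface (state.player, state.actions(), hashable states), and it estimates each agent's associative Q-value at a state as that agent's Q-value under the acting agent's greedy action, i.e., a one-step backward-induction-style estimate of the subgame perfect outcome, used as the bootstrap target in the Q-learning update.

```python
import random
from collections import defaultdict

# Illustrative sketch only. Assumed (hypothetical) environment interface:
# state.player            -> index of the agent who moves at this state
# state.actions()         -> list of legal actions
# states must be hashable, since they are used as dictionary keys.

class AssociativeQLearner:
    def __init__(self, n_agents, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n = n_agents
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q[i][(state, action)] = agent i's estimated return (defaults to 0.0)
        self.Q = [defaultdict(float) for _ in range(n_agents)]

    def assoc_values(self, state, actions):
        """Associative Q-values at `state`: the acting agent is assumed to
        pick its own greedy action, and every agent's value is its Q-value
        at that action -- an estimate of the subgame perfect outcome."""
        mover = state.player
        best = max(actions, key=lambda a: self.Q[mover][(state, a)])
        return [self.Q[i][(state, best)] for i in range(self.n)]

    def select_action(self, state, actions):
        # Epsilon-greedy action selection by the agent whose turn it is.
        if random.random() < self.epsilon:
            return random.choice(actions)
        mover = state.player
        return max(actions, key=lambda a: self.Q[mover][(state, a)])

    def update(self, state, action, rewards, next_state, done):
        # Bootstrap each agent's target from the associative Q-values
        # of the successor state (or the terminal rewards).
        if done:
            targets = rewards
        else:
            nxt = self.assoc_values(next_state, next_state.actions())
            targets = [rewards[i] + self.gamma * nxt[i] for i in range(self.n)]
        for i in range(self.n):
            q = self.Q[i][(state, action)]
            self.Q[i][(state, action)] += self.alpha * (targets[i] - q)
```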
