Abstract

Recent work has shown how information theory extends conventional full-rationality game theory to accommodate boundedly rational agents. The associated mathematical framework can be used to solve distributed optimization and control problems. This is done by translating the distributed problem into an iterated game, where each agent's mixed strategy (i.e., its stochastically determined move) sets a different variable of the problem. Accordingly, the expected value of the objective function of the distributed problem is determined by the joint probability distribution across the agents' moves. The agents' mixed strategies are updated from one game iteration to the next so as to converge on a joint distribution that optimizes the expected value of the objective function. Here, a set of new techniques for this updating is presented. These and older techniques are then extended to apply to uncountable move spaces. We also present an extension of the approach to include (in)equality constraints over the underlying variables. Another contribution is that we show how to extend the Monte Carlo version of the approach to cases where some agents have no Monte Carlo samples for some of their moves, and derive an "automatic annealing schedule".
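
To make the iterated-game picture concrete, the following is a minimal Python sketch of one plausible instantiation, not the paper's own algorithm: independent agents with small discrete move sets, Monte Carlo estimates of each agent's conditional expected cost, and mixed strategies re-fit to a Boltzmann distribution under a simple geometric annealing schedule. All names and parameters (objective, optimize, cooling, etc.) are illustrative assumptions.

import numpy as np

def objective(x):
    # Example objective over the joint move x (one variable per agent):
    # a separable quadratic plus a weak coupling term, chosen only for illustration.
    x = np.asarray(x, dtype=float)
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.abs(np.diff(x)))

def optimize(n_agents=5, moves=np.linspace(-2.0, 2.0, 9),
             n_samples=200, n_iters=50, temp=1.0, cooling=0.9, seed=0):
    rng = np.random.default_rng(seed)
    n_moves = len(moves)
    # Each agent starts with a uniform mixed strategy over its move set.
    q = np.full((n_agents, n_moves), 1.0 / n_moves)

    for _ in range(n_iters):
        # Sample joint moves from the current product distribution.
        idx = np.array([rng.choice(n_moves, size=n_samples, p=q[i])
                        for i in range(n_agents)])            # shape (n_agents, n_samples)
        costs = np.array([objective(moves[idx[:, s]]) for s in range(n_samples)])

        for i in range(n_agents):
            # Monte Carlo estimate of E[G | agent i plays move m];
            # moves with no samples fall back to the overall mean cost.
            est = np.full(n_moves, costs.mean())
            for m in range(n_moves):
                mask = idx[i] == m
                if mask.any():
                    est[m] = costs[mask].mean()
            # Re-fit agent i's strategy to a Boltzmann distribution over the
            # estimated conditional costs (lower cost -> higher probability).
            logits = -est / temp
            logits -= logits.max()
            q[i] = np.exp(logits)
            q[i] /= q[i].sum()

        temp *= cooling  # fixed geometric annealing of the temperature

    best = moves[np.argmax(q, axis=1)]   # most probable move of each agent
    return best, objective(best)

if __name__ == "__main__":
    solution, value = optimize()
    print("approximate minimizer:", solution, "objective:", value)

In this sketch each agent's strategy is updated from its own conditional cost estimates only, so the joint distribution stays a product distribution; the paper's treatment of unsampled moves, constraints, and the automatic annealing schedule replaces the ad hoc fallback and fixed cooling factor used here.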
