Abstract

In this paper, we consider a general class of discrete-time partially observed mean-field games with Polish state, action, and measurement spaces, and with risk-sensitive (exponential) cost functions that capture the risk-averse behaviour of each agent. As is standard in mean-field game models, each agent is weakly coupled with the rest of the population through its individual cost and state dynamics via the empirical distribution of the states. We first establish the existence of a mean-field equilibrium in the infinite-population limit by transforming the risk-sensitive problem into one with a risk-neutral (that is, additive rather than multiplicative) cost function, converting the resulting partially observed stochastic control problem into a fully observed one on the belief space, and applying the principle of dynamic programming. We then show that the mean-field equilibrium policy, when adopted by every agent, constitutes an approximate Nash equilibrium for games with a sufficiently large number of agents.
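To fix ideas, the exponential cost structure referred to above can be sketched as follows; the notation here is illustrative and assumed, not taken from the paper itself.

```latex
% Illustrative sketch (symbols assumed, not from the paper):
% risk-sensitive cost of agent i over horizon T, risk parameter \lambda > 0,
% where \mu_t denotes the empirical distribution of the agents' states.
J_i(\pi) \;=\; \frac{1}{\lambda}\,
  \log \mathbb{E}^{\pi}\!\Bigl[\exp\Bigl(\lambda \sum_{t=0}^{T-1}
  c\bigl(x_t^i, u_t^i, \mu_t\bigr)\Bigr)\Bigr].
% The expectation of the exponential of an additive cost is the expectation
% of a product of per-stage factors \exp(\lambda\, c(x_t^i, u_t^i, \mu_t));
% in this sense the cost is multiplicative, and the transformation mentioned
% in the abstract recasts it as an equivalent risk-neutral (additive) problem.
```

The risk parameter \(\lambda\) tunes risk aversion: a Taylor expansion of the log-exponential shows that small \(\lambda\) penalizes the variance of the accumulated cost in addition to its mean.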
