Abstract

We consider the problem of optimal control of a mean-field stochastic differential equation (SDE) under model uncertainty. The model uncertainty is represented by ambiguity about the law of the state X(t) at time t. For example, it could be the law of X(t) with respect to the given, underlying probability measure ℙ; this is the classical case, in which there is no model uncertainty. But it could also be the law with respect to some other probability measure or, more generally, any random measure on ℝ with total mass 1. We represent this model uncertainty control problem as a stochastic differential game, driven by a mean-field related type SDE, with two players. The control of one player, representing the uncertainty about the law of the state, is a measure-valued stochastic process μ(t), and the control of the other player is a classical real-valued stochastic process u(t). Such optimal control problems with respect to random probability processes in a non-Markovian setting are a new type of stochastic control problem that has not been studied before. By constructing a new Hilbert space of measures, we obtain sufficient and necessary maximum principles for Nash equilibria of such games in the general nonzero-sum case, and for saddle points in zero-sum games. As an application, we find an explicit solution of the problem of optimal consumption, under model uncertainty, from a cash flow described by a mean-field related type SDE.
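To fix ideas, here is a minimal sketch, in LaTeX, of the kind of two-player game formulation the abstract describes. The coefficients b and σ, the Brownian motion B(t), the horizon T, and the performance functionals J_i are generic placeholders assumed for illustration, not the paper's exact specification:

\begin{align*}
  dX(t) &= b\big(t, X(t), \mu(t), u(t)\big)\,dt
         + \sigma\big(t, X(t), \mu(t), u(t)\big)\,dB(t),
         \qquad X(0) = x_0, \\
  J_i(u, \mu) &= \mathbb{E}\Big[\int_0^T f_i\big(t, X(t), \mu(t), u(t)\big)\,dt
         + g_i\big(X(T), \mu(T)\big)\Big], \qquad i = 1, 2.
\end{align*}

Player 1 chooses the classical control u(t) and player 2 chooses the measure-valued process μ(t); a Nash equilibrium is a pair (û, μ̂) from which neither player can improve their own functional by a unilateral deviation. In the zero-sum case J₁ = −J₂, and a Nash equilibrium reduces to a saddle point.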
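The Hilbert space of measures mentioned in the abstract is what makes differentiation with respect to the measure argument tractable. One natural construction, offered here only as an illustrative assumption rather than the paper's definition, embeds measures into an L² space through a weighted Fourier transform:

\[
  \|\mu\|_{\mathcal{M}}^2 = \int_{\mathbb{R}} |\hat{\mu}(y)|^2 \, e^{-y^2}\,dy,
  \qquad
  \hat{\mu}(y) = \int_{\mathbb{R}} e^{-\mathrm{i}xy}\,d\mu(x).
\]

Under such a norm every measure with total mass 1 has finite norm, and the Hamiltonian of the game can be differentiated in the Fréchet sense with respect to μ, which is the kind of structure that sufficient and necessary maximum principles require.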
