Abstract

In this paper we study a general class of mean-field stochastic control problems with partial observation, in which the coefficients depend in a non-linear way not only on the state process $X_t$ and its control $u_t$ but also on the conditional law $E[X_t \mid \mathcal{F}_t^Y]$ of the state process given the past of the observation process $Y$. We first establish the well-posedness of the controlled system by proving weak existence and uniqueness in law. Assuming neither convexity of the control state space nor differentiability of the coefficients with respect to the control variable, we study Peng's stochastic maximum principle for our control problem. The novelty and the difficulty of our work stem from the fact that, for a given admissible control $u$, the solution of the associated control problem is only a weak one. As a consequence, the probability measure in the solution, $P^u = L_T^u Q$, also depends on $u$ and has a density $L_T^u$ with respect to a reference measure $Q$. Characterizing an optimal control therefore leads to the differentiation of non-linear functions $f\bigl(P^u \circ \{E^{P^u}[X_t \mid \mathcal{F}_t^Y]\}^{-1}\bigr)$ with respect to $(L_T^u, X_t)$. For the study of Peng's maximum principle, this yields a new type of first- and second-order variational equations and adjoint backward stochastic differential equations, all with new mean-field terms and with coefficients which are not Lipschitz. For their estimates, and for those of the Taylor expansion, new techniques had to be introduced and rather technical results had to be established. The necessary optimality condition we obtain extends Peng's with new, non-trivial terms.
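For orientation, a partially observed mean-field system of the kind described above can be sketched in the following schematic form; the coefficients $b$, $\sigma$, $h$ and the Brownian motions $W$, $B$ are generic placeholders introduced here for illustration, not the paper's own notation:
\begin{align*}
dX_t &= b\bigl(t, X_t, E^{P^u}[X_t \mid \mathcal{F}_t^Y], u_t\bigr)\,dt + \sigma\bigl(t, X_t, E^{P^u}[X_t \mid \mathcal{F}_t^Y], u_t\bigr)\,dW_t, \\
dY_t &= h(t, X_t)\,dt + dB_t, \qquad P^u = L_T^u\, Q, \quad L_t^u = \frac{dP^u}{dQ}\Big|_{\mathcal{F}_t}.
\end{align*}
Here the weak solution is constructed under the reference measure $Q$, and the Girsanov-type density $L_T^u$ recovers the control-dependent measure $P^u = L_T^u Q$; this is why differentiating the non-linear coefficient $f\bigl(P^u \circ \{E^{P^u}[X_t \mid \mathcal{F}_t^Y]\}^{-1}\bigr)$ must be carried out jointly with respect to $(L_T^u, X_t)$.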
