Abstract
In this paper, we study the extended mean field control problem, a class of McKean–Vlasov stochastic control problems in which the state dynamics and the reward functions depend upon the joint (conditional) distribution of the controlled state and the control process. By considering an appropriate controlled Fokker–Planck equation, we formulate an optimization problem over a space of measure-valued processes and, under suitable assumptions, prove the equivalence between this optimization problem and the extended mean field control problem. Moreover, with the help of this new optimization problem, we establish the associated limit theory: the extended mean field control problem is the limit of a large population control problem in which the interactions occur through the empirical distribution of the state and control processes.
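For orientation, a minimal sketch of such a problem can be written as follows; the coefficient names $b$, $\sigma$, $\sigma_0$, $L$, $g$ are generic placeholders rather than the paper's exact notation, and $B$ denotes the common noise appearing in the conditioning used later in the summary:
$$
V := \sup_{\alpha}\ \mathbb{E}\Big[\int_0^T L\big(t, X_t, \mathcal{L}(X_t,\alpha_t \mid B), \alpha_t\big)\,dt + g\big(X_T, \mathcal{L}(X_T \mid B)\big)\Big],
$$
subject to the McKean–Vlasov dynamics
$$
dX_t = b\big(t, X_t, \mathcal{L}(X_t,\alpha_t \mid B), \alpha_t\big)\,dt
      + \sigma\big(t, X_t, \mathcal{L}(X_t,\alpha_t \mid B), \alpha_t\big)\,dW_t
      + \sigma_0\,dB_t,
$$
where $\mathcal{L}(X_t,\alpha_t \mid B)$ is the conditional joint law of the controlled state and the control given the common noise $B$.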
Highlights
The aim of this paper is to provide a rigorous connection between two stochastic control problems: on the one hand, the stochastic control problem of a large population interacting through the empirical distribution of their states and controls; on the other hand, the problem of controlling stochastic dynamics that depend upon the joint distribution of the controlled state and the control, called the extended mean field control problem.
To bypass the difficulty generated by the distribution of the control, especially in proving the limit theory result (propagation of chaos), we introduce a new stochastic control problem.
Motivated by the Fokker–Planck equation satisfied by the couple from (2.4), we give in this part an equivalent, less "rigid" formulation of the extended mean field control problem (see the sketch after this list).
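As a hedged illustration of the type of equation involved (the actual equation (2.4) and its assumptions are stated in the paper; $b$, $\sigma$, $\sigma_0$ are generic coefficients, with $\sigma_0$ taken constant for simplicity), the conditional state marginal $\mu_t := \mathcal{L}(X_t \mid B)$ of the joint law $m_t := \mathcal{L}(X_t, \alpha_t \mid B)$ solves, against smooth test functions $\varphi$,
$$
d\langle \mu_t, \varphi\rangle
= \Big\langle m_t,\ b(t,\cdot,m_t,\cdot)\!\cdot\!\nabla\varphi
  + \tfrac{1}{2}\,\mathrm{Tr}\big[(\sigma\sigma^\top + \sigma_0\sigma_0^\top)(t,\cdot,m_t,\cdot)\,\nabla^2\varphi\big]\Big\rangle\,dt
  + \langle \mu_t, \sigma_0^\top\nabla\varphi\rangle\cdot dB_t .
$$
It is over measure-valued solutions of a controlled equation of this kind that the paper's equivalent optimization problem is posed.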
Summary
The aim of this paper is to provide a rigorous connection between two stochastic control problems: on the one hand, the stochastic control problem of a large population (or particles) interacting through the empirical distribution of their states and controls; on the other hand, the problem of controlling stochastic dynamics that depend upon the joint (conditional) distribution of the controlled state and the control, called the extended mean field control problem. The idea of using relaxed controls, i.e. a control seen as a probability measure of the type δ_{α_t}(du)dt, helps to obtain the compactness properties necessary for proving these types of results. Following up on these ideas, [11] develops a general overview of the McKean–Vlasov or mean field control problem and treats the case with common noise, which turns out to be a non-trivial extension. The classical idea is to embed this map into a canonical space, here the space C([0, T]; P(R^n)) of continuous functions from [0, T] into the space of probability measures on R^n, and to obtain the connection via compactness arguments and a martingale problem (see [24], and [11] for the non-Markovian case with common noise). In our situation, this type of continuity is lost because we must take into account the map t ↦ L(X_t, α_t | B) (or t ↦ φ^N_t), which does not have this property, since the presence of the control α can generate discontinuities. For any q ∈ M(E), we define q_{t∧·}(ds, de) := q(ds, de)|_{[0,t]×E} + δ_{e_0}(de) ds|_{(t,T]×E}.
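To make the large population problem more concrete, a hedged sketch, with generic coefficients $b$, $\sigma$, $\sigma_0$ and idiosyncratic noises $W^i$ (only the empirical measure $\varphi^N_t$ matches the notation used above), is:
$$
dX^i_t = b\big(t, X^i_t, \varphi^N_t, \alpha^i_t\big)\,dt
       + \sigma\big(t, X^i_t, \varphi^N_t, \alpha^i_t\big)\,dW^i_t
       + \sigma_0\,dB_t,
\qquad
\varphi^N_t := \frac{1}{N}\sum_{i=1}^{N}\delta_{(X^i_t,\,\alpha^i_t)},
\quad 1 \le i \le N.
$$
The limit theory then asserts that, as $N \to \infty$, the value of this $N$-agent control problem converges to the value of the extended mean field control problem, in which $\varphi^N_t$ is replaced by the conditional joint law $\mathcal{L}(X_t, \alpha_t \mid B)$.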