Abstract

In this paper we formulate a time-optimal control problem in the space of probability measures. The main motivation is to address situations in deterministic finite-dimensional control systems where the initial position of the controlled particle is not exactly known, but can be described by a probability measure on \(\mathbb{R}^{d}\). For this problem we propose generalized versions of some concepts from classical finite-dimensional control theory (namely, the target set, the dynamics, and the minimum time function), and we formulate a Hamilton-Jacobi-Bellman equation in the space of probability measures that is solved by the generalized minimum time function; to this end we extend a notion of approximate viscosity sub/superdifferential in the space of probability measures, originally introduced by Cardaliaguet and Quincampoix (Int. Game Theor. Rev. 10, 1–16, 2008). We also prove some representation results linking the classical concepts to their generalized counterparts. The main tool is a superposition principle, proved by Ambrosio, Gigli and Savaré [3], which provides a probabilistic representation of the solution of the continuity equation as a weighted superposition of absolutely continuous solutions of the characteristic system.
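As an illustrative sketch, under assumed notation consistent with the abstract (the precise definitions and hypotheses are those of the paper and are not reproduced here): admissible trajectories are curves \(t\mapsto\mu_t\) in \(\mathscr{P}(\mathbb{R}^{d})\) solving the continuity equation
\[
\partial_t \mu_t + \operatorname{div}(v_t\,\mu_t) = 0
\]
for an admissible velocity field \(v_t\); given a generalized target \(\tilde{S}\subseteq\mathscr{P}(\mathbb{R}^{d})\), the generalized minimum time function is
\[
\tilde{T}(\mu) := \inf\bigl\{\,T\ge 0 \;:\; \exists\,(\mu_t)_{t\in[0,T]} \text{ admissible},\ \mu_0=\mu,\ \mu_T\in\tilde{S}\,\bigr\},
\]
and the superposition principle represents each \(\mu_t\) as the push-forward \(\mu_t=(e_t)_{\#}\eta\) of a probability measure \(\eta\) on curves, concentrated on absolutely continuous solutions of the characteristic system \(\dot{x}(t)=v_t(x(t))\), where \(e_t(\gamma):=\gamma(t)\) denotes the evaluation map.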

