Abstract

We provide a data-driven framework for optimal control of a continuous-time, control-affine stochastic dynamical system. The proposed framework relies on linear operator theory involving the Perron-Frobenius (P-F) and Koopman operators. Our first result, involving the P-F operator, provides a convex formulation of the optimal control problem in the dual space of densities. This convex formulation of the stochastic optimal control problem leads to an infinite-dimensional convex program. A finite-dimensional approximation of the convex program is obtained using a data-driven approximation of the linear operators. We provide rigorous error bounds and a convergence rate for the data-driven approximation of the linear operators, and comment on the convergence rate of the optimal control with respect to the data size. Our second result demonstrates the use of the Koopman operator, which is dual to the P-F operator, for stochastic optimal control design. We show that the Hamilton-Jacobi-Bellman (HJB) equation can be expressed using the Koopman operator, and we provide an iterative procedure, along the lines of the popular policy iteration algorithm, for solving the HJB equation based on the data-driven approximation of the Koopman operator. The two formulations, namely the convex formulation involving the P-F operator and the Koopman-based formulation using the HJB equation, can be viewed as dual to each other, where the duality follows from the dual nature of the P-F and Koopman operators. Finally, we present examples to demonstrate the efficacy of the developed framework and to numerically verify the convergence rates for the operator approximation and the optimal control.
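The abstract does not spell out how the data-driven approximation of the linear operators is computed. A standard technique in this line of work is extended dynamic mode decomposition (EDMD), which fits a finite-dimensional Koopman matrix from snapshot pairs via least squares. The sketch below is an illustration of that generic technique, not the paper's specific algorithm; the dictionary of observables, the linear test system, and all variable names are assumptions chosen for the example.

```python
import numpy as np

def edmd_koopman(X, Y, dictionary):
    """Least-squares Koopman approximation from snapshot pairs.

    X, Y : (n_samples, n_states) arrays with Y[i] the successor state of X[i].
    dictionary : maps an (n_samples, n_states) array to lifted
                 observables of shape (n_samples, n_features).
    Returns K with dictionary(Y) ~= dictionary(X) @ K.
    """
    PsiX = dictionary(X)
    PsiY = dictionary(Y)
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K

def monomials(X):
    # Hypothetical dictionary: monomials up to degree one, [1, x1, x2].
    return np.hstack([np.ones((X.shape[0], 1)), X])

# Sanity check on a linear system x+ = A x: with a linear dictionary,
# the Koopman matrix exactly encodes the system matrix A.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
Y = X @ A.T
K = edmd_koopman(X, Y, monomials)
```

For the linear test system above, the lower-right block of `K` recovers `A` (transposed, given the row-vector convention), which is a common way to validate an EDMD implementation before applying it to nonlinear or stochastic data.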
