Path-integral control, which stems from the stochastic Hamilton-Jacobi-Bellman equation, is one method for controlling stochastic nonlinear systems. This paper gives a new insight into nonlinear stochastic optimal control problems from the perspective of Koopman operators. Even when a finite-dimensional dynamical system is nonlinear, the corresponding Koopman operator is linear. Although the Koopman operator is infinite-dimensional, a suitable approximation makes it tractable and useful in analysis and applications. The Koopman operator perspective clarifies that it suffices to focus on a specific type of observable in the control problem, a fact that becomes easier to understand via path-integral control. Furthermore, focusing on this specific observable leads naturally to a power-series expansion, from which coupled ordinary differential equations for discrete-state-space systems are derived. A demonstration on a nonlinear stochastic optimal control problem shows that the derived equations work well.
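As a minimal sketch of the linearity property invoked above (the notation $f$, $\phi^t$, $g$, and $\mathcal{K}^t$ is assumed here, not fixed by the abstract): for a flow map $\phi^t$ generated by a possibly nonlinear system $\dot{x} = f(x)$, the Koopman operator acts on scalar observables $g$ by composition with the flow, and this action is linear in $g$ regardless of how nonlinear $f$ is:
\[
(\mathcal{K}^t g)(x) = g\bigl(\phi^t(x)\bigr),
\qquad
\mathcal{K}^t(\alpha g_1 + \beta g_2)
= \alpha\,\mathcal{K}^t g_1 + \beta\,\mathcal{K}^t g_2 .
\]
The price of this linearity is that $\mathcal{K}^t$ acts on an infinite-dimensional space of observables, which is why the abstract emphasizes approximation and the restriction to a specific type of observable.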