Abstract
Path-integral control, which stems from the stochastic Hamilton-Jacobi-Bellman equation, is one of the methods for controlling stochastic nonlinear systems. This paper gives a new insight into nonlinear stochastic optimal control problems from the perspective of Koopman operators. Even when a finite-dimensional dynamical system is nonlinear, the corresponding Koopman operator is linear. Although the Koopman operator is infinite-dimensional, an adequate approximation makes it tractable and useful in several discussions and applications. Employing the Koopman operator perspective, it is clarified that only a specific type of observable needs to be considered in the control problem; this fact becomes easier to understand via path-integral control. Furthermore, the focus on this specific observable leads to a natural power-series expansion, from which coupled ordinary differential equations for discrete state-space systems are derived. A demonstration of nonlinear stochastic optimal control shows that the derived equations work well.
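For intuition about the path-integral control framework the abstract builds on, the following is a minimal Monte Carlo sketch for a hypothetical one-dimensional system. The drift, cost functions, and parameters here are illustrative assumptions, not the paper's model; the estimator is the standard weighted-rollout form of path-integral control (optimal control recovered as an exponentially weighted average of the initial noise increments over uncontrolled sample paths), not the Koopman-based equations derived in the paper.

```python
import numpy as np

# A minimal sketch of path-integral control for an assumed toy 1D system:
#   dx = f(x) dt + g (u dt + dW),  cost = phi(x_T) + integral of q(x) dt
# With the control-cost weight tied to the noise level, the optimal control
# follows from a weighted average over uncontrolled (u = 0) rollouts:
#   u*(x, t) dt ~ E[ exp(-S/lambda) dW_0 ] / E[ exp(-S/lambda) ]

rng = np.random.default_rng(0)

f = lambda x: -x + 0.25 * x**3     # assumed nonlinear drift (illustrative)
q = lambda x: 0.5 * x**2           # assumed running state cost
phi = lambda x: 2.0 * x**2         # assumed terminal cost
g, lam = 1.0, 1.0                  # noise gain and "temperature"
dt, steps, n_samples = 0.01, 200, 5000

def pi_control(x0):
    """Estimate u*(x0, 0) from uncontrolled sample paths."""
    x = np.full(n_samples, float(x0))
    dw0 = rng.normal(0.0, np.sqrt(dt), n_samples)  # first noise increment
    S = np.zeros(n_samples)                        # accumulated path cost
    dw = dw0
    for _ in range(steps):
        S += q(x) * dt
        x = x + f(x) * dt + g * dw
        dw = rng.normal(0.0, np.sqrt(dt), n_samples)
    S += phi(x)
    w = np.exp(-(S - S.min()) / lam)   # importance weights, shifted for stability
    return (w @ dw0) / (dt * w.sum())

print(pi_control(1.0))  # estimated optimal control at x = 1, t = 0
```

Subtracting `S.min()` before exponentiating leaves the weighted average unchanged but avoids numerical underflow when path costs are large.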