Abstract

In this work, we consider the problem of learning the Koopman operator for discrete-time autonomous systems. The learning problem is formulated as a constrained regularized empirical loss minimization in the infinite-dimensional space of linear operators. We show that a representer theorem holds for the learning problem under certain, fairly general, conditions, which allows a convex reformulation of the problem in a finite-dimensional space without any approximation or loss of precision. We discuss the inclusion of various forms of regularization and constraints in the learning problem, such as the operator norm, the Frobenius norm, the operator rank, the nuclear norm, and stability, and derive the corresponding equivalent finite-dimensional problems. Furthermore, we demonstrate the connection between the proposed formulation and extended dynamic mode decomposition. Finally, we present several numerical examples to illustrate the theoretical results and verify the performance of regularized learning of Koopman operators.
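As a rough illustration of the setting the abstract describes, the following is a minimal sketch of extended dynamic mode decomposition (EDMD) with Frobenius-norm (ridge) regularization. The dictionary, the regularization weight `lam`, and the example linear system are illustrative assumptions, not taken from the paper; the paper's actual formulation is an operator-space problem reduced via a representer theorem, of which this finite-dimensional least squares is only the classical special case.

```python
import numpy as np

def edmd(X, Y, dictionary, lam=1e-6):
    """Estimate a finite-dimensional Koopman matrix K from snapshot pairs.

    X, Y hold samples x_k and x_{k+1} row-wise; `dictionary` lifts states
    to observables. K solves the ridge-regularized least squares
        min_K ||Psi(Y) - Psi(X) K||_F^2 + lam * ||K||_F^2,
    i.e. K = (Psi_X^T Psi_X + lam I)^{-1} Psi_X^T Psi_Y.
    """
    PsiX = dictionary(X)          # shape (n_samples, n_features)
    PsiY = dictionary(Y)
    G = PsiX.T @ PsiX + lam * np.eye(PsiX.shape[1])
    A = PsiX.T @ PsiY
    return np.linalg.solve(G, A)

# Illustrative example (assumed, not from the paper): a stable linear
# system x_{k+1} = A x_k with an affine dictionary psi(x) = [1, x].
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
X = rng.standard_normal((200, 2))
Y = X @ A.T
dictionary = lambda Z: np.hstack([np.ones((Z.shape[0], 1)), Z])
K = edmd(X, Y, dictionary)
# For this linear system, the lifted-state block of K recovers A^T.
```

For a linear system with this dictionary, the lower-right block of `K` approximately recovers `A.T`, which is a quick sanity check that the lifted regression is consistent with the dynamics.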

