In this paper, we consider the problem of predicting observations generated online by an unknown, partially observable linear system driven by Gaussian noise. In the linear Gaussian setting, the optimal predictor in the mean-square-error sense is the celebrated Kalman filter, which can be computed explicitly when the system model is known. When the system model is unknown, we must learn to predict observations online from finite data, possibly incurring non-zero regret with respect to the Kalman filter's prediction. We show that it is possible to achieve regret of order <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\text{poly}\log (N)$</tex-math></inline-formula> with high probability, where <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$N$</tex-math></inline-formula> is the number of observations collected. This is achieved using an online least-squares algorithm, which exploits the approximately linear relation between future and past observations. The regret analysis is based on the stability properties of the Kalman filter, recent statistical tools for finite-sample analysis of system identification, and classical results for the analysis of least-squares algorithms for time series. Our regret analysis also applies to other predictors, e.g., multi-step-ahead prediction, or prediction under exogenous inputs, including closed-loop prediction. A key technical contribution is that our bounds hold even for the class of non-explosive systems (including marginally stable systems), which had not previously been addressed in the setting of online prediction.
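To make the setting concrete, the following is a minimal sketch, not the paper's exact algorithm, of online least-squares prediction from a window of past observations, as the abstract describes. The system matrices, noise levels, and window length `p` are illustrative assumptions; the recursive least-squares (RLS) update stands in for whatever online solver the paper uses.

```python
import numpy as np

# Illustrative unknown system (assumed for this sketch, not from the paper)
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # stable state-transition matrix
C = np.array([[1.0, 0.0]])              # observation matrix
p = 5                                   # regression window length (assumed)
N = 2000                                # number of online observations

x = np.zeros(2)
ys = []
theta = np.zeros(p)       # regression coefficients over past p observations
P = np.eye(p) * 1e3       # RLS inverse-covariance estimate
sq_err = 0.0

for t in range(N):
    # Unknown system evolves with Gaussian process and measurement noise
    x = A @ x + 0.1 * rng.normal(size=2)
    y = float(C @ x) + 0.1 * rng.normal()
    if t >= p:
        z = np.array(ys[-p:])          # regressor: the p most recent outputs
        y_hat = theta @ z              # one-step-ahead prediction
        sq_err += (y - y_hat) ** 2
        # Recursive least-squares update of theta
        Pz = P @ z
        k = Pz / (1.0 + z @ Pz)        # gain vector
        theta = theta + k * (y - y_hat)
        P = P - np.outer(k, Pz)
    ys.append(y)

avg_err = sq_err / (N - p)
print(avg_err)
```

The average squared prediction error settles near the noise floor; the paper's result can be read as bounding the cumulative excess of such an online predictor over the Kalman filter by a polylogarithmic function of N.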