Abstract

In recent years, functional linear models have attracted growing attention in statistics and machine learning, where the goal is to recover the slope function or the linear functional of the predictor that it induces. This paper considers an online regularized learning algorithm for functional linear models in a reproducing kernel Hilbert space. It provides a convergence analysis of the excess prediction error and the estimation error, with polynomially decaying step-size and constant step-size respectively. Fast convergence rates can be derived via a capacity-dependent analysis. Introducing an explicit regularization term extends the saturation boundary of unregularized online learning algorithms with polynomially decaying step-size, and yields fast convergence rates for the estimation error without a capacity assumption; the latter remains an open problem for the unregularized online learning algorithm with decaying step-size. With constant step-size, this paper also establishes convergence rates for both the prediction error and the estimation error that are competitive with those in the existing literature.
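
For concreteness, the following is a minimal sketch of the kind of iteration the abstract refers to, assuming the standard RKHS formulation of functional linear regression; the notation below (the kernel integral operator $L_K$, the regularization parameter $\lambda$, and the step-size $\eta_t$) is conventional and is not spelled out in the abstract itself. The model relates a scalar response to a functional predictor through a slope function $\beta^{*}$ in the RKHS $\mathcal{H}_K$:
\[
Y = \int_{\mathcal{T}} \beta^{*}(s)\, X(s)\, ds + \varepsilon, \qquad \beta^{*} \in \mathcal{H}_K .
\]
A regularized stochastic-gradient update on the sample $(X_t, Y_t)$ then takes the form
\[
\beta_{t+1} = \beta_t - \eta_t \left[ \left( \int_{\mathcal{T}} \beta_t(s)\, X_t(s)\, ds - Y_t \right) L_K X_t + \lambda\, \beta_t \right],
\qquad (L_K g)(s) = \int_{\mathcal{T}} K(s,u)\, g(u)\, du,
\]
where the first bracketed term is the gradient of the squared prediction error in $\mathcal{H}_K$ and $\lambda \beta_t$ is the explicit regularization term the abstract highlights. The two step-size regimes analyzed would correspond to $\eta_t = \eta_1 t^{-\theta}$ (polynomially decaying) and $\eta_t \equiv \eta$ (constant).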
