Abstract

In this paper, we provide a mathematical foundation for least square regression learning with an indefinite kernel and coefficient regularization. Apart from continuity and boundedness, the kernel function is not required to satisfy any further regularity conditions. An explicit expression of the solution via the sampling operator and the empirical integral operator is derived and plays an important role in our analysis. It provides a natural error decomposition in which the approximation error is characterized by a reproducing kernel Hilbert space associated with a certain Mercer kernel. A careful analysis shows that the sample error has $O(\frac{1}{m})$ decay. We deduce the error bound and prove asymptotic convergence. Satisfactory learning rates are then derived under a very mild regularity condition on the regression function. When the kernel is itself a Mercer kernel, better rates are given by a rigorous analysis, which shows that coefficient regularization is powerful in learning smooth functions. The saturation effect and the relation to spectral algorithms are discussed.
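
For context, a minimal sketch of the $\ell^2$ coefficient regularization scheme referred to above, written in the normalization that is standard in this literature (the paper's own constants and notation may differ): given a sample $z = \{(x_i, y_i)\}_{i=1}^{m}$ and a continuous, bounded kernel $K$ that need not be symmetric or positive semi-definite, the estimator is
\[
f_z(x) = \sum_{j=1}^{m} \alpha_j^z\, K(x, x_j),
\qquad
\alpha^z = \operatorname*{arg\,min}_{\alpha \in \mathbb{R}^m}\;
\frac{1}{m} \sum_{i=1}^{m} \Bigg( \sum_{j=1}^{m} \alpha_j K(x_i, x_j) - y_i \Bigg)^{2}
+ \lambda m \sum_{j=1}^{m} \alpha_j^{2}.
\]
In matrix form, with $(\mathbf{K})_{ij} = K(x_i, x_j)$, the minimizer solves $(\mathbf{K}^{\top}\mathbf{K} + \lambda m^{2} I)\,\alpha^z = \mathbf{K}^{\top} y$; since $\mathbf{K}^{\top}\mathbf{K} + \lambda m^{2} I$ is positive definite for any $\lambda > 0$, the scheme is well posed even when $\mathbf{K}$ itself is indefinite.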
