In this paper we study the relation between the convergence rates of spectral regularization methods under Hölder-type source conditions arising in the theory of ill-posed inverse problems, as the noise level $\delta$ goes to 0, and the convergence rates arising in statistical kernel learning, as the number of samples $n$ goes to infinity. To this end, we introduce a family of hybrid estimators in the statistical learning context whose convergence rates have two properties: first, they coincide with those of spectral methods; second, they are connected to the rates of spectral regularization in ill-posed inverse problems, provided a suitable inverse proportionality relation between $n$ and $\delta$ holds. This family of estimators allows us to convert upper rates depending on $n$ into upper rates depending on $\delta$ and, vice versa, lower rates depending on $\delta$ into lower rates depending on $n$, quantifying the deviation in each case. The analysis is carried out under general source conditions, both when the rank of the forward operator is finite and when it is infinite; in the latter case, we treat both the setting with no assumptions on the eigenvalues and the setting with a polynomial eigenvalue decay.
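For orientation, the following is a minimal sketch of the standard Hölder source condition and the corresponding deterministic rate, as commonly stated in the inverse problems literature; the exponent $\nu$, the calibration $\delta \asymp n^{-1/2}$, and the resulting rate in $n$ are illustrative background assumptions, not results quoted from the paper.

```latex
% Hölder source condition for the ill-posed problem Ax = y with
% true solution x^\dagger: for some exponent \nu > 0 and source
% element w,
x^\dagger = (A^* A)^{\nu} w, \qquad \|w\| \le \rho .
% With noisy data y^\delta satisfying \|y^\delta - y\| \le \delta,
% an order-optimal spectral regularization method achieves, as
% \delta \to 0,
\|x_\alpha^{\delta} - x^\dagger\| = O\!\big(\delta^{\frac{2\nu}{2\nu + 1}}\big) .
% Under an illustrative calibration \delta \asymp n^{-1/2}, the
% same bound reads, as n \to \infty,
\|x_\alpha^{\delta} - x^\dagger\| = O\!\big(n^{-\frac{\nu}{2\nu + 1}}\big) .
```

This kind of calibration is the mechanism by which a rate stated in terms of $\delta$ can be translated into a rate stated in terms of $n$, and conversely.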