Abstract
Kernel methods are popular in nonlinear and nonparametric regression due to their solid mathematical foundations and optimal statistical properties. However, scalability remains the primary bottleneck in applying kernel methods to large-scale regression analysis. This paper aims to improve the scalability of kernel methods. We combine Nyström subsampling with the preconditioned conjugate gradient method to solve regularized kernel regression. Our theoretical analysis shows that achieving optimal convergence rates requires only [Formula: see text] memory and [Formula: see text] time (up to logarithmic factors). Numerical experiments show that our algorithm outperforms existing methods in time efficiency and prediction accuracy on large-scale datasets. Notably, compared with the FALKON algorithm [A. Rudi, L. Carratino and L. Rosasco, Falkon: An optimal large scale kernel method, in Advances in Neural Information Processing Systems (Curran Associates, 2017), pp. 3891–3901], which is regarded as an optimal large-scale kernel method, our method is more flexible (it applies to non-positive definite kernel functions) and has lower algorithmic complexity. Additionally, our theoretical analysis relaxes the restrictive conditions on hyperparameters imposed in previous convergence analyses.
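To make the general idea concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: kernel ridge regression solved by conjugate gradient, where a Nyström approximation built from m landmark points is used as a preconditioner via the Woodbury identity. The Gaussian kernel, the landmark count m, the regularization parameter lam, and the helper nystrom_pcg_krr are illustrative assumptions; a scalable implementation would also avoid materializing the full kernel matrix and would stream the matrix-vector product in blocks.

```python
# Illustrative sketch (assumed setup, not the authors' exact method):
# Nystrom-preconditioned conjugate gradient for kernel ridge regression.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg


def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def nystrom_pcg_krr(X, y, m=100, lam=1e-3, sigma=1.0, maxiter=200):
    n = X.shape[0]
    # Nystrom subsampling: pick m landmark points uniformly at random.
    idx = np.random.choice(n, size=min(m, n), replace=False)
    Knm = gaussian_kernel(X, X[idx], sigma)    # n x m cross-kernel
    Kmm = gaussian_kernel(X[idx], X[idx], sigma)  # m x m landmark kernel

    # Full kernel matvec for the system (K + n*lam*I) alpha = y.
    # (In a scalable implementation K is never formed; the matvec is blocked.)
    K = gaussian_kernel(X, X, sigma)
    A = LinearOperator((n, n), matvec=lambda v: K @ v + n * lam * v)

    # Preconditioner: inverse of the Nystrom approximation
    # (Knm Kmm^+ Knm^T + n*lam*I)^{-1}, applied with the Woodbury identity:
    #   (1/(n*lam)) * (I - Knm (n*lam*Kmm + Knm^T Knm)^{-1} Knm^T)
    M_small = n * lam * Kmm + Knm.T @ Knm
    M_small += 1e-10 * np.eye(M_small.shape[0])  # numerical jitter

    def precond(v):
        w = np.linalg.solve(M_small, Knm.T @ v)
        return (v - Knm @ w) / (n * lam)

    M = LinearOperator((n, n), matvec=precond)

    # Preconditioned conjugate gradient on the regularized kernel system.
    alpha, info = cg(A, y, M=M, maxiter=maxiter)
    return alpha, idx


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
    alpha, idx = nystrom_pcg_krr(X, y, m=50, lam=1e-3)
```

The design choice illustrated here is that the expensive n x n system is only touched through matrix-vector products, while all factorizations are confined to m x m matrices built from the Nyström landmarks, which is what keeps the memory footprint small.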