Abstract

Kernel methods are a class of machine learning algorithms that learn and discover patterns in a high-dimensional (possibly infinite-dimensional) feature space obtained by an often nonlinear mapping of the input space. A major drawback of kernel methods is their time complexity: for a data set with n input points, a kernel method runs in O(n^3) time, which is intractable for large data sets. The random Nyström features method is an approximation that reduces the time complexity to O(np^2 + p^3), where p is the number of randomly selected input points. The O(p^3) term comes from the spectral decomposition that must be performed on a p × p Gram matrix, and when p is large even this approximate algorithm is time consuming. In this paper we apply the randomized SVD method in place of the spectral decomposition and reduce the time complexity further. The inputs of the randomized SVD algorithm are the p × p Gram matrix and a number m < p. In this case the time complexity is O(nm^2 + p^2 m + m^3), and linear regression is performed on m-dimensional random features. We prove that the expected error of a predictor learned via this method is almost the same as the error of the kernel predictor. Additionally, we show empirically that this predictor outperforms the one obtained with the Nyström method alone.
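The following is a minimal sketch of the pipeline the abstract describes: sample p landmark points, form the p × p Gram matrix, replace its exact eigendecomposition with a rank-m randomized SVD, and fit ridge regression on the resulting m-dimensional features. The RBF kernel, uniform landmark sampling, and all function names and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel values between rows of X and rows of Y (assumed kernel).
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom_rsvd_features(X, p=200, m=50, gamma=1.0, seed=0):
    # Sample p landmarks and form the p x p Gram matrix W.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=p, replace=False)
    L = X[idx]
    W = rbf_kernel(L, L, gamma)

    # Rank-m randomized SVD of W instead of a full spectral
    # decomposition: range finder costs O(p^2 m), small SVD O(m^2 p).
    Omega = rng.standard_normal((p, m))
    Q, _ = np.linalg.qr(W @ Omega)                    # p x m orthonormal basis
    B = Q.T @ W                                       # m x p projection
    Ub, s, _ = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                                        # approximate top-m eigenvectors
    s = np.maximum(s, 1e-12)                          # guard against division by zero

    # Map all n points to m-dimensional Nystrom features.
    C = rbf_kernel(X, L, gamma)                       # n x p cross-kernel
    return C @ U / np.sqrt(s)                         # n x m feature matrix

# Linear (ridge) regression on the m-dimensional features; synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(1000)
Z = nystrom_rsvd_features(X, p=200, m=40)
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)  # O(nm^2 + m^3)
y_hat = Z @ w
```

Forming Z.T @ Z and solving the m × m system costs O(nm^2 + m^3), which together with the randomized SVD gives the O(nm^2 + p^2 m + m^3) total stated above.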
