In classical settings such as Euclidean space, positive definite kernels and their analytic properties are explicitly available and can be incorporated directly into kernel-based learning algorithms. The situation is different when the underlying domain is a discrete, irregular graph: here the kernels have to be computed in a preliminary step before they can be used in a kernel machine. Typically, such a kernel is given as a matrix function of the graph Laplacian, and its direct evaluation becomes computationally prohibitive for very large graphs. In this work, we investigate five different block Krylov subspace methods that provide cheaper iterative approximations of these kernels. We analyze the convergence properties of these methods and study to what extent they preserve the symmetry and positive definiteness of the original kernels they approximate. We further discuss their computational complexity and memory requirements, as well as possible implications for the resulting kernel predictors in machine learning.
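To illustrate the underlying idea, the following is a minimal sketch, not the block methods studied in the paper: it approximates a single column of a graph kernel, here the heat kernel exp(-tL) applied to a vector b, with a plain single-vector Lanczos (Krylov) iteration instead of forming the full matrix function. All function names, the test graph, and the parameters (subspace dimension m, diffusion time t) are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, eigh_tridiagonal


def lanczos_fL_b(L, b, m, f):
    """Approximate f(L) @ b with an m-step Lanczos (Krylov) iteration.

    L : symmetric (n, n) matrix, e.g. a graph Laplacian
    b : (n,) starting vector
    m : Krylov subspace dimension
    f : scalar function applied to Ritz values, e.g. lambda x: np.exp(-t * x)
    """
    n = len(b)
    beta0 = np.linalg.norm(b)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    Q[:, 0] = b / beta0
    for j in range(m):
        w = L @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        # full reorthogonalisation against previous Lanczos vectors (stability)
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:           # Krylov subspace is exhausted early
                m = j + 1
                Q, alpha, beta = Q[:, :m], alpha[:m], beta[:m - 1]
                break
            Q[:, j + 1] = w / beta[j]
    # f(L) b  is approximated by  ||b|| * Q_m * f(T_m) * e_1,
    # where T_m is the small tridiagonal matrix built from alpha and beta.
    theta, S = eigh_tridiagonal(alpha, beta)
    e1 = np.zeros(len(theta))
    e1[0] = 1.0
    return beta0 * (Q @ (S @ (f(theta) * (S.T @ e1))))


# Usage sketch: one column of the heat kernel exp(-t L) on a small path graph.
n, t = 200, 0.5
A = np.eye(n, k=1) + np.eye(n, k=-1)      # adjacency matrix of a path graph
L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
b = np.zeros(n)
b[0] = 1.0                                # unit vector -> one kernel column
exact = expm(-t * L) @ b                  # dense matrix function (expensive for large n)
approx = lanczos_fL_b(L, b, m=30, f=lambda x: np.exp(-t * x))
print(np.linalg.norm(exact - approx))     # error is small after a few dozen Lanczos steps
```

The block methods discussed in the paper extend this scheme to several right-hand-side vectors at once; this single-vector version is shown only to convey how a Krylov subspace replaces the dense evaluation of the matrix function.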