Abstract

Kernel-based clustering algorithms can identify and capture nonlinear structure in datasets, thereby achieving better performance than linear clustering. However, the memory required to compute and store the entire kernel matrix makes it difficult for kernel-based clustering to handle large-scale datasets. In this study, we prove that incomplete Cholesky factorization (InCF) generates a rank-s approximation of a kernel matrix after s iterations. We also show that the approximation error decreases exponentially as the number of iterations increases, provided the eigenvalues of the kernel matrix decay sufficiently fast. We therefore employ InCF to accelerate kernel clustering and reduce memory usage. The key idea of the proposed kernel k-means clustering using InCF is to approximate the entire kernel matrix as the product of a low-rank matrix and its transpose, and then apply linear k-means clustering to the columns of the transpose of the low-rank matrix. We prove that the clustering error of this method decreases exponentially as the rank of the approximate matrix increases. Experimental results show that the proposed algorithm performs similarly to kernel k-means clustering while scaling to large datasets.
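The scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a standard pivoted incomplete Cholesky factorization to obtain G with K ≈ GGᵀ, then runs ordinary k-means on the rows of G (the columns of Gᵀ). The function names `incomplete_cholesky` and `kernel_kmeans_incf`, the tolerance parameter, and the use of scikit-learn's `KMeans` are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def incomplete_cholesky(K, rank, tol=1e-8):
    """Pivoted incomplete Cholesky factorization (illustrative sketch).

    Returns G of shape (n, s) with s <= rank such that K ~ G @ G.T.
    Stops early if the largest remaining diagonal residual falls below tol.
    """
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()  # residual diagonal of K - G @ G.T
    for s in range(rank):
        p = int(np.argmax(d))            # pivot: largest residual diagonal
        if d[p] < tol:                   # residual negligible; stop early
            return G[:, :s]
        # New column: residual of pivot column, normalized by sqrt of pivot.
        G[:, s] = (K[:, p] - G @ G[p, :]) / np.sqrt(d[p])
        d -= G[:, s] ** 2                # update residual diagonal
    return G

def kernel_kmeans_incf(K, k, rank):
    """Approximate kernel k-means: factorize K, then linear k-means on rows of G."""
    G = incomplete_cholesky(K, rank)
    return KMeans(n_clusters=k, n_init=10).fit_predict(G)
```

Because squared Euclidean distances between rows of G equal the kernel-induced distances whenever K = GGᵀ exactly, linear k-means on G approximates kernel k-means on K, with the approximation improving as the rank grows.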
