Abstract

Spectral clustering techniques are valuable tools in signal processing and machine learning for partitioning complex data sets. The effectiveness of spectral clustering stems from constructing a non-linear embedding by building a similarity graph and computing the spectral decomposition of its Laplacian matrix. However, spectral clustering methods fail to scale to large data sets because of their high computational cost and memory usage. A popular approach to these problems utilizes the Nyström method, an efficient sampling-based algorithm for computing low-rank approximations to large positive semi-definite matrices. This paper demonstrates that the popular approach of Nyström-based spectral clustering has severe limitations. Existing time-efficient methods discard critical information by prematurely reducing the rank of the similarity matrix associated with sampled points. Moreover, it is not well understood how the Nyström approximation affects the quality of the resulting spectral embedding. To address these limitations, this work presents a principled spectral clustering algorithm that exploits spectral properties of the similarity matrix associated with sampled points to regulate accuracy-efficiency trade-offs. We provide theoretical results that narrow this gap in understanding and present numerical experiments with real and synthetic data. Empirical results demonstrate the efficacy and efficiency of the proposed method compared to existing spectral clustering techniques based on the Nyström method and other efficient approaches. The overarching goal of this work is to provide an improved baseline for future research on accelerating spectral clustering.
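The Nyström method mentioned above reconstructs a large positive semi-definite matrix from a sampled subset of its columns: sampling an index set S, forming C = K[:, S] and W = K[S, S], and approximating K ≈ C W⁺ Cᵀ. The following is a minimal NumPy sketch of this idea; the synthetic data, sample size, and tolerance are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank PSD matrix K = A A^T (assumed toy example, rank 5).
n, r = 100, 5
A = rng.standard_normal((n, r))
K = A @ A.T

# Nystrom approximation: sample m landmark indices, form C = K[:, S]
# and W = K[S, S], then approximate K by C W^+ C^T.
m = 20
S = rng.choice(n, size=m, replace=False)
C = K[:, S]
W = K[np.ix_(S, S)]
# rcond guards against the rank deficiency of W when forming the pseudo-inverse.
K_approx = C @ np.linalg.pinv(W, rcond=1e-8) @ C.T

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(rel_err)
```

Because K here is exactly rank 5 and the sample size m exceeds that rank, the approximation is exact up to floating-point error; for general kernel matrices the error depends on how fast the spectrum decays and on how the landmarks are sampled.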

Highlights

  • Cluster analysis is a fundamental problem in signal processing and exploratory data analysis that divides a data set into several groups using the information found only in the data

  • This paper presents a systematic treatment of utilizing the Nyström method for improving the accuracy and scalability of approximate spectral clustering

  • MATLAB built-in functions are used for computing standard matrix factorizations, including the singular value decomposition (SVD) and eigenvalue decomposition (EVD)


Summary

Introduction

Cluster analysis is a fundamental problem in signal processing and exploratory data analysis that divides a data set into several groups using only the information found in the data. Among several techniques [1], [2], spectral clustering [3], [4] is one of the most prominent and successful methods for capturing complex structures, such as non-spherical clusters. In these scenarios, spectral clustering outperforms popular Euclidean clustering techniques, such as K-means clustering [5], [6]. Spectral clustering expresses data clustering as a graph partitioning problem by constructing an undirected similarity graph in which each point in the data set is a node. The first step of spectral clustering involves forming a positive semi-definite kernel matrix K ∈ R^{n×n} with the (i, j)-th entry [K]_{ij} = κ(x_i, x_j), which describes similarities among the n input data points. The quadratic complexity in the number of input data points renders spectral clustering intractable for large data sets.
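The pipeline described above (kernel matrix, graph Laplacian, spectral decomposition) can be sketched in a few lines. The following is a generic NumPy illustration with assumed toy data and an RBF kernel; it is not the paper's MATLAB implementation, and the data, kernel bandwidth, and two-cluster sign-based partition are our own simplifying assumptions.

```python
import numpy as np

# Toy data: two well-separated groups of 2-D points (assumed example data).
X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5],
              [3.0, 0.0], [3.5, 0.0], [3.0, 0.5], [3.5, 0.5]])

# Step 1: form the PSD kernel matrix K with an RBF kernel
# kappa(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)).
sigma = 1.0
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / (2 * sigma ** 2))

# Step 2: symmetric normalized Laplacian L = I - D^{-1/2} K D^{-1/2}.
d = K.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(X)) - D_inv_sqrt @ K @ D_inv_sqrt

# Step 3: spectral embedding from the eigenvectors of L. For two clusters,
# the sign of the eigenvector of the second-smallest eigenvalue (the
# Fiedler vector) already yields the partition.
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)
print(labels)
```

For k > 2 clusters one would instead keep the k leading eigenvectors as an n × k embedding and run K-means on its rows; the exact eigendecomposition above costs O(n³), which is the scalability bottleneck the Nyström method targets.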

