Abstract

Clustering is a widely used unsupervised learning technique that groups data into homogeneous clusters. However, when dealing with real-world data containing categorical values, existing algorithms can be computationally costly in high dimensions and can struggle with noisy data that have missing values. Furthermore, with a single exception, existing algorithms provide no theoretical guarantees on clustering accuracy. In this article, we propose a general categorical data encoding method and a computationally efficient spectral-based algorithm to cluster high-dimensional noisy categorical data (nominal or ordinal). Under a statistical model for data on $m$ attributes from $n$ subjects in $r$ clusters with missing probability $\epsilon$, we show that our algorithm exactly recovers the true clusters with high probability when $mn(1-\epsilon) \geq C M r^2 \log^3 M$, with $M = \max(n, m)$ and a fixed constant $C$. In addition, we show that $mn(1-\epsilon)^2 \geq r\delta/2$ with $0 < \delta < 1$ is necessary for any algorithm to succeed with probability at least $(1+\delta)/2$. When $m = n$ and $r$ is fixed, the sufficient condition matches the necessary condition up to a $\operatorname{polylog}(n)$ factor. In numerical studies, our algorithm outperforms several existing algorithms in both clustering accuracy and computational efficiency. Supplementary materials for this article are available online.
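The abstract describes the approach only at a high level, so the sketch below is a generic illustration of the kind of spectral pipeline it refers to, not the paper's actual encoding method or algorithm: the one-hot encoding, zero-imputation of missing entries, rank-$r$ truncated SVD, and final k-means step are all assumptions made for illustration (Python, using numpy and scikit-learn).

```python
# A minimal, illustrative sketch (not the paper's algorithm): one-hot encode the
# categorical attributes, zero-impute missing entries, take a rank-r truncated SVD,
# and run k-means on the leading singular vectors. All modeling choices below
# (encoding, imputation, embedding) are assumptions made for illustration.
import numpy as np
from sklearn.cluster import KMeans


def spectral_cluster_categorical(X, r, seed=0):
    """Cluster n subjects with m categorical attributes into r groups.

    X    : (n, m) float array of integer category codes, np.nan = missing.
    r    : number of clusters.
    seed : random seed for k-means initialization.
    """
    n, m = X.shape
    blocks = []
    for j in range(m):
        col = X[:, j]
        observed = ~np.isnan(col)
        levels = np.unique(col[observed])
        # One-hot encode attribute j; a missing entry becomes an all-zero row.
        B = np.zeros((n, levels.size))
        for k, level in enumerate(levels):
            B[observed & (col == level), k] = 1.0
        blocks.append(B)
    A = np.hstack(blocks)            # n x (total number of category levels)

    # Rank-r truncated SVD of the encoded matrix acts as a denoising step.
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    embedding = U[:, :r] * s[:r]     # project subjects onto the top-r directions

    # k-means on the low-dimensional embedding returns the estimated labels.
    return KMeans(n_clusters=r, n_init=10, random_state=seed).fit_predict(embedding)


# Toy usage: r = 3 planted clusters of binary attributes, 10% entry noise,
# and entries missing completely at random with probability eps = 0.2.
rng = np.random.default_rng(0)
n, m, r, eps = 300, 50, 3, 0.2
centers = rng.integers(0, 2, size=(r, m))
truth = rng.integers(0, r, size=n)
X = (centers[truth] ^ (rng.random((n, m)) < 0.1)).astype(float)
X[rng.random((n, m)) < eps] = np.nan
pred = spectral_cluster_categorical(X, r)
```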
