Abstract

Clustering in high-dimensional spaces is a difficult problem that recurs in many domains, for example in image analysis. The difficulty stems from the fact that high-dimensional data usually live in different low-dimensional subspaces hidden within the original space. A family of Gaussian mixture models designed for high-dimensional data, combining the ideas of subspace clustering and parsimonious modeling, is presented. These models give rise to a clustering method based on the expectation–maximization algorithm, called high-dimensional data clustering (HDDC). To fit the data correctly, HDDC estimates the specific subspace and the intrinsic dimension of each group. Experiments on artificial and real data sets show that HDDC outperforms existing methods for clustering high-dimensional data.
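To illustrate the core idea described above (estimating each group's specific subspace and intrinsic dimension), the following is a minimal sketch, not the authors' EM-based implementation: it alternates hard assignments with per-cluster PCA, picking each cluster's dimension from a variance threshold and reassigning points by subspace reconstruction error. The function names (e.g. estimate_intrinsic_dim, subspace_clustering) and the 0.90 variance threshold are assumptions for the example only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA


def estimate_intrinsic_dim(eigenvalues, threshold=0.90):
    """Smallest number of components explaining `threshold` of the variance (assumed rule)."""
    ratios = np.cumsum(eigenvalues) / np.sum(eigenvalues)
    return int(np.searchsorted(ratios, threshold) + 1)


def subspace_clustering(X, n_clusters=3, n_iter=10):
    # Initialize assignments with k-means, then refine using per-cluster subspaces.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    for _ in range(n_iter):
        subspaces = []
        for k in range(n_clusters):
            Xk = X[labels == k]
            if len(Xk) < 2:
                subspaces.append(None)
                continue
            pca = PCA().fit(Xk)
            d_k = estimate_intrinsic_dim(pca.explained_variance_)
            # Keep the cluster mean and its d_k-dimensional principal subspace.
            subspaces.append((Xk.mean(axis=0), pca.components_[:d_k]))
        # Reassign each point to the cluster whose subspace reconstructs it best.
        errors = np.full((len(X), n_clusters), np.inf)
        for k, sub in enumerate(subspaces):
            if sub is None:
                continue
            mean, basis = sub
            proj = (X - mean) @ basis.T @ basis + mean
            errors[:, k] = np.linalg.norm(X - proj, axis=1)
        labels = errors.argmin(axis=1)
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic groups living in different low-dimensional subspaces of R^50.
    A = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 50))
    B = 5 + rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))
    X = np.vstack([A, B])
    print(subspace_clustering(X, n_clusters=2)[:10])
```

Unlike this hard-assignment sketch, the models in the paper are full Gaussian mixtures fitted by expectation–maximization, with parsimonious constraints on the per-group covariance parameters.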
