Abstract
Due to the rapid development of modern multimedia techniques, high-dimensional image data are frequently encountered in many image analysis tasks, such as clustering and feature learning. K-means (KM) is one of the most widely used and efficient tools for clustering high-dimensional data. However, because such data commonly contain irrelevant features or noise, conventional KM suffers from degraded performance on high-dimensional data. Recent studies attempt to overcome this problem by combining KM with subspace learning. Nevertheless, they usually depend on complex eigenvalue decomposition, which requires expensive computational resources. Moreover, their clustering models ignore the local manifold structure among data, failing to exploit the underlying adjacency information. Two points are critical for clustering high-dimensional image data: efficient feature selection and clear adjacency exploration. Based on these considerations, we propose an auto-adjoined subspace clustering method. Concretely, to efficiently locate redundant features, we impose an extremely sparse feature selection matrix on KM, which is easy to optimize. In addition, to accurately encode the local adjacency among data without the influence of noise, we propose to automatically assign the connectivity of each sample in the low-dimensional feature space. Compared with several state-of-the-art clustering methods, the proposed method consistently improves clustering performance on six publicly available benchmark image datasets, demonstrating its effectiveness.
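To make the high-level idea concrete, the following is a minimal, hypothetical Python sketch (not the paper's actual optimization): it alternates between running KM in a selected feature subspace and re-scoring features with a simple variance criterion, then builds a k-NN connectivity graph in the reduced space. The function name `sparse_km_sketch`, the variance-based feature scoring, and all parameter choices are illustrative assumptions, not the proposed method.

```python
# Illustrative sketch only, not the paper's algorithm: alternate between
# (i) k-means on a sparsely selected feature subset and (ii) re-scoring
# features, then build sample connectivity in the reduced space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def sparse_km_sketch(X, n_clusters=10, n_selected=50, n_neighbors=5, n_iters=5, seed=0):
    """X: (n_samples, n_features) image data; rows are samples."""
    rng = np.random.default_rng(seed)
    # Initial sparse (0/1) feature selection: a random subset of columns.
    selected = rng.choice(X.shape[1], size=n_selected, replace=False)
    labels = None
    for _ in range(n_iters):
        Z = X[:, selected]  # low-dimensional view of the data
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
        # Score each feature by the variance of its per-cluster means and
        # keep the top-n_selected features (an extremely sparse selection).
        centers = np.array([X[labels == c].mean(axis=0) for c in range(n_clusters)])
        selected = np.argsort(centers.var(axis=0))[-n_selected:]
    # Connectivity of each sample, assigned in the selected feature subspace.
    A = kneighbors_graph(X[:, selected], n_neighbors=n_neighbors, mode="connectivity")
    return labels, selected, A
```

In this sketch the adjacency graph `A` is built after clustering purely for illustration; in the proposed method, by contrast, the connectivity of each sample is assigned automatically within the low-dimensional feature space as part of the model itself.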