Abstract

Despite their success, existing regression-based clustering methods built on shallow models suffer from three weaknesses: (1) they often pay no attention to the interplay between representation learning and clustering, which results in unsatisfactory clustering performance; (2) they ignore the relationship between the data distribution and the target distribution, which makes them sensitive to noise and illumination changes; and (3) these nonlinear regression methods usually impose a hard constraint that minimizes the mismatch between the discrete cluster assignment matrix and the latent representations, which leads to over-fitting. In this paper, we tackle these problems with deep adversarial regression and formulate regression-based clustering by deep adversarial learning (RCDA). By seamlessly combining with a stacked autoencoder, the proposed model integrates deep nonlinear representation learning and clustering in a unified framework. Specifically, RCDA adopts a relaxed constraint between the latent representations and a continuous cluster assignment matrix to avoid over-fitting, and simultaneously uses the t-SNE algorithm and adversarial learning to analyze the data and target distributions, thereby improving representation learning. Experimental results on public benchmark datasets demonstrate that the proposed architecture outperforms state-of-the-art clustering models on image clustering tasks.
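The "continuous cluster assignment matrix" mentioned above can be illustrated with the Student's t-kernel used by t-SNE and by DEC-style deep clustering: each embedded point receives a soft probability of membership in every cluster rather than a hard label. The sketch below is only an illustration of that relaxed-assignment idea (the function name and toy data are our own, not from the paper):

```python
import numpy as np

def soft_assignments(z, centroids, alpha=1.0):
    """Student's t-kernel soft assignments (the kernel used by t-SNE/DEC).

    Returns a matrix Q where Q[i, j] is the probability that embedded
    point z[i] belongs to cluster j. Unlike a hard argmin assignment,
    Q stays continuous, which is the kind of relaxed constraint the
    abstract contrasts with a discrete assignment matrix.
    """
    # squared Euclidean distance from every embedding to every centroid
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)  # normalize rows to sum to 1

# toy usage: four 2-D embeddings, two cluster centroids
z = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
mu = np.array([[0.0, 0.0], [5.0, 5.0]])
Q = soft_assignments(z, mu)  # points near mu[0] get high Q[:, 0], etc.
```

Because every entry of Q is differentiable in the embedding, a network can be trained end-to-end against it, whereas a discrete assignment matrix cannot propagate gradients.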

Highlights

  • Clustering, the primitive exploration of data with little or no prior knowledge, is one of the most indispensable and fundamental research topics in artificial intelligence, with applications in many fields such as image retrieval, image annotation, document analysis, and image segmentation

  • These nonlinear regression methods usually impose a hard constraint that minimizes the mismatch between the discrete cluster assignment matrix and the latent representations, which leads to over-fitting

  • Regression-based clustering by deep adversarial learning (RCDA) adopts a relaxed constraint between the latent representations and a continuous cluster assignment matrix to avoid over-fitting, and simultaneously uses the t-SNE algorithm and adversarial learning to analyze the data and target distributions, thereby improving representation learning

Introduction

Clustering, the primitive exploration of data with little or no prior knowledge, is one of the most indispensable and fundamental research topics in artificial intelligence, with applications in many fields such as image retrieval, image annotation, document analysis, and image segmentation. To deal with the curse of dimensionality, a common approach is to transform data from a high-dimensional space to a lower-dimensional feature space by applying hand-crafted feature extraction or dimension reduction techniques such as principal component analysis (PCA), the scale-invariant feature transform (SIFT), and histograms of oriented gradients (HOG); clustering can then be performed in the lower-dimensional feature space. However, these hand-crafted features ignore the interconnection between feature learning and clustering. To address this issue, Torre et al. [6] propose a shallow model that performs clustering and feature learning simultaneously by integrating K-Means and linear discriminant analysis (LDA) into a joint framework. Even so, the representation ability of features learned by such shallow models is limited.
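The classic two-stage pipeline described above (reduce dimensionality first, then cluster in the reduced space) can be sketched in a few lines. This is a minimal illustration using PCA and K-Means on synthetic data; SIFT or HOG features would play the same role as the PCA step for images:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy high-dimensional data: two well-separated blobs in a 50-D space
blob_a = rng.normal(loc=0.0, scale=0.1, size=(30, 50))
blob_b = rng.normal(loc=3.0, scale=0.1, size=(30, 50))
X = np.vstack([blob_a, blob_b])

# stage 1: reduce to a low-dimensional feature space
Z = PCA(n_components=2).fit_transform(X)

# stage 2: cluster in the reduced space
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
```

Note that the PCA step is fit without any knowledge of the clustering objective; this decoupling is exactly the "interconnection between feature learning and clustering" that the joint shallow models, and later the deep models discussed in this paper, try to restore.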

