Abstract

Contrastive learning is a powerful technique for learning feature representations without manual annotation. The K-nearest neighbor (KNN) method is commonly used to construct positive sample pairs for computing the contrastive loss. However, KNN often selects false positive pairs, which degrades clustering performance. We propose a novel Deep Contrastive Clustering method based on a GrapH convolutional network, called GHDCC. It combines an instance-level contrastive loss with mean square error (MSE) regularization and a cluster-level contrastive loss to incorporate semantic features and perform cluster assignments. The method uses a graph convolutional network (GCN) to improve the semantic consistency of features and linear interpolation data augmentation to strengthen the representation ability of the model. To reduce the occurrence of false positive sample pairs, we select only samples whose similarity exceeds a predefined threshold to construct the adjacency matrix. Experimental results on six public datasets show that GHDCC outperforms contrastive clustering (CC) by a large margin on all datasets except CIFAR-10. GHDCC compares favorably with other deep contrastive clustering methods and achieves the highest clustering accuracy of 0.913 on ImageNet-10.
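The thresholded adjacency matrix described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature matrix, the cosine-similarity measure, and the threshold value are assumptions.

```python
import numpy as np

def build_adjacency(features: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Build a binary adjacency matrix, keeping only pairs whose cosine
    similarity exceeds a threshold (value here is a placeholder)."""
    # L2-normalize each feature row so dot products equal cosine similarities.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.clip(norms, 1e-12, None)
    sim = normalized @ normalized.T
    # Keep only confident pairs; this is the false-positive filter.
    adj = (sim > threshold).astype(np.float32)
    np.fill_diagonal(adj, 1.0)  # self-loops, as is common for GCN inputs
    return adj
```

The resulting matrix could then serve as the graph input to a GCN layer; raising the threshold trades recall of true positive pairs for fewer false positives.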
