Abstract
Clustering aims to make data points in the same group more similar and data points in different groups less similar. To this end, we propose three novel fast clustering models motivated by maximizing within-class similarity, which can recover a more intrinsic clustering structure of the data. Different from traditional clustering methods, we first divide all n samples into m subclasses by a pseudo label propagation algorithm, and then merge the m subclasses into c classes (c < m) by the three proposed co-clustering models, where c is the true number of categories. On the one hand, first dividing all samples into more subclasses preserves more local information. On the other hand, the three proposed co-clustering models are motivated by the idea of maximizing the sum of within-class similarity, which exploits the dual information between rows and columns. Besides, the proposed pseudo label propagation algorithm also serves as a new method to construct anchor graphs with linear time complexity. A series of experiments is conducted on both synthetic and real-world datasets, and the experimental results show the superior performance of the three models. It is worth noting that, among the proposed models, FMAWS2 is a generalization of FMAWS1, and FMAWS3 is a generalization of the other two.
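The two-stage pipeline summarized above (over-cluster into m subclasses, then merge them down to c classes) can be sketched as follows. This is an illustrative stand-in, not the paper's FMAWS models: plain k-means replaces the pseudo label propagation step, and greedy agglomeration of subcluster centers replaces the co-clustering merge; all function names and parameters here are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Stage 1 stand-in: over-cluster X into k subclusters (Lloyd's k-means)."""
    # Deterministic initialization: pick k evenly spaced samples (copy,
    # since a slice of X would be a view that center updates would corrupt).
    centers = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels, centers

def merge_to_c(centers, c):
    """Stage 2 stand-in: greedily merge the two closest subcluster centers
    (size-weighted means) until only c groups remain; return a map from
    subcluster index to final class index."""
    groups = [[j] for j in range(len(centers))]
    cents = [centers[j].astype(float) for j in range(len(centers))]
    while len(groups) > c:
        best = None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                dist = np.linalg.norm(cents[a] - cents[b])
                if best is None or dist < best[0]:
                    best = (dist, a, b)
        _, a, b = best
        na, nb = len(groups[a]), len(groups[b])
        cents[a] = (na * cents[a] + nb * cents[b]) / (na + nb)
        groups[a] += groups[b]
        del groups[b], cents[b]
    sub_to_class = np.empty(len(centers), dtype=int)
    for ci, g in enumerate(groups):
        sub_to_class[g] = ci
    return sub_to_class

# Two well-separated blobs; over-cluster into m = 6 subclasses, merge to c = 2.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
sub_labels, centers = kmeans(X, 6)
final = merge_to_c(centers, 2)[sub_labels]
print("final classes:", len(np.unique(final)))
```

Over-clustering first lets each subcluster capture local structure; the merge step then only has to reason about m center points rather than all n samples, which is the source of the speed-up the abstract alludes to.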
From: IEEE Transactions on Neural Networks and Learning Systems