Abstract

Multiple kernel subspace clustering has been widely applied in machine learning and computer vision. However, traditional multiple kernel subspace clustering methods still suffer from two problems. On the one hand, the combination of kernel functions is uncertain: choosing inappropriate kernel functions for data from different sources or with different characteristics yields inaccurate results. On the other hand, data sources are diverse, so the subspace partition is often inaccurate. In this paper, we unify coefficient discriminant information and multiple kernel subspace clustering in a single process. Firstly, building on a consensus Hilbert space, we propose a consensus kernel method that reconstructs a set of different kernel matrices into a kernel matrix better matched to the data characteristics, which effectively resolves the uncertainty in combining kernel functions. Secondly, a block-diagonal structure is constructed by subspace clustering, which effectively handles data from different sources. In this process, we propose a coefficient discriminant term that constrains the data in the coefficient space to group samples of the same class together as much as possible and to separate samples of different classes as far as possible. Finally, we propose a new iterative optimization method based on the Alternating Direction Method of Multipliers (ADMM) to solve the final objective. Our experiments are conducted on twenty different data sets. The results show that our proposed algorithm achieves higher ACC, NMI, and ARI than state-of-the-art clustering methods.
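The abstract describes fusing several base kernel matrices into a single consensus kernel. As a minimal, hypothetical sketch (the paper's actual construction in the consensus Hilbert space is not shown here; the base kernels and uniform weights below are illustrative assumptions), a convex combination of positive semidefinite base kernels is itself a valid kernel:

```python
import numpy as np

def base_kernels(X, gamma=0.5, degree=2):
    """Build a few illustrative base kernel matrices from data X (n x d)."""
    lin = X @ X.T                                  # linear kernel
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * lin     # squared distances
    rbf = np.exp(-gamma * np.maximum(d2, 0.0))     # RBF kernel
    poly = (lin + 1.0) ** degree                   # polynomial kernel
    return [lin, rbf, poly]

def consensus_kernel(kernels, weights=None):
    """Convex combination of base kernels; with nonnegative weights
    summing to one, the result remains symmetric and PSD."""
    m = len(kernels)
    if weights is None:
        weights = np.full(m, 1.0 / m)              # uniform weights (assumption)
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return sum(w * K for w, K in zip(weights, kernels))

X = np.random.default_rng(0).normal(size=(8, 3))
K = consensus_kernel(base_kernels(X))
```

In practice the weights themselves would be learned jointly with the clustering objective rather than fixed; the sketch only shows why the fused matrix is still a legal kernel to cluster on.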
