Abstract

Kernel learning approaches have recently attracted much attention as a means of improving clustering performance. Although extensive studies have attempted to suppress the adverse influence of less confident samples, they do not sufficiently account for the differing importance of samples in kernel learning. To alleviate this issue, our proposed algorithm uses a half-quadratic (HQ) paradigm to automatically determine the reliability of samples and to adaptively capture neighborhood-relation information according to sample importance. We model an effective, parameter-free kernel exploration by exploiting both global and HQ-based local data relationships. This paper designs an effective algorithm based on the Alternating Direction Method of Multipliers (ADMM) to convert the fourth-order objective function into a second-order one, and offers an efficient closed-form solution to a simplex optimization problem based on the closest-point theorem. Moreover, we prove the convergence of the proposed algorithm both theoretically and experimentally. Comprehensive experiments on benchmark and real-world datasets demonstrate that, on average, the presented framework improves clustering performance over competitors by about 5%–22% in terms of accuracy, NMI, and purity, verifying the efficacy and superiority of our proposed method.
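The abstract mentions a closed-form solution to a simplex-constrained subproblem via the closest-point theorem. A standard realization of this idea (not necessarily the authors' exact update) is the Euclidean projection of a vector onto the probability simplex, which has a well-known closed form; the sketch below, with a hypothetical function name, illustrates it:

```python
import numpy as np

def project_onto_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}, i.e. the closest point on the simplex to v.
    Closed form via sorting (O(n log n)); a common subroutine in
    simplex-constrained kernel/graph learning updates."""
    n = v.size
    u = np.sort(v)[::-1]              # sort entries in descending order
    css = np.cumsum(u)                # running sums of the sorted entries
    # largest index rho with u_rho > (sum of top rho+1 entries - 1)/(rho+1)
    rho = np.nonzero(u * np.arange(1, n + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)  # shift that enforces sum-to-one
    return np.maximum(v - theta, 0.0)     # clip negatives after shifting
```

For example, `project_onto_simplex(np.array([2.0, 0.0]))` yields `[1.0, 0.0]`, the simplex point nearest to `(2, 0)`. This is a sketch of the generic closest-point projection only; the paper's actual subproblem may include additional terms.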
