Abstract
The distance metric plays an important role in many machine learning algorithms. Recently, there has been growing interest in distance metric learning for the semi-supervised setting. In the last few years, many methods have been proposed for metric learning when pairwise similarity (must-link) and/or dissimilarity (cannot-link) constraints are available along with unlabeled data. Most of these methods learn a global Mahalanobis metric (or, equivalently, a linear transformation). Although some recently introduced methods have devised nonlinear extensions of linear metric learning methods, they usually allow only limited forms of distance metrics and can use only similarity constraints. In this paper, we propose a nonlinear metric learning method that learns a fully flexible distance metric by learning a nonparametric kernel matrix. The proposed method uses both similarity and dissimilarity constraints, as well as the topological structure of the data, to learn an appropriate distance metric. Our method is formulated as a convex optimization problem for learning a kernel matrix; this convexity makes the resulting metric learning method free of local optima. Experimental results on synthetic and real-world data sets show that the proposed method outperforms recently introduced metric learning methods for semi-supervised clustering.
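For context on the baseline the abstract contrasts against, the following is a minimal illustrative sketch (not the paper's method) of the global Mahalanobis metric that linear metric learning methods fit: a distance parameterized by a positive semidefinite matrix M, so that learning the metric amounts to learning M from must-link/cannot-link constraints.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Illustrative Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)).

    M is assumed positive semidefinite; this is the quantity that global
    linear metric learning methods parameterize and optimize.
    """
    d = x - y
    return float(np.sqrt(d @ M @ d))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# With M = I the Mahalanobis distance reduces to the ordinary
# Euclidean distance: here sqrt(1^2 + 1^2) = sqrt(2).
print(mahalanobis(x, y, np.eye(2)))

# A non-identity (diagonal) M stretches some coordinate directions
# more than others: 2*1^2 + 0.5*1^2 = 2.5, so the distance is sqrt(2.5).
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
print(mahalanobis(x, y, M))
```

Because d_M is equivalent to the Euclidean distance after the linear map x → M^{1/2} x, such metrics are inherently linear, which is the limitation the abstract's kernel-matrix formulation addresses.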