Abstract

We introduce a new algorithm for distance metric learning which uses pairwise similarity (equivalence) and dissimilarity constraints. The method is adapted to the high-dimensional feature spaces that occur in many computer vision applications. It first projects the data onto the subspace orthogonal to the linear span of the difference vectors of similar sample pairs. Similar samples thus have identical projections, i.e., the distance between the two elements of each similar sample pair becomes zero in the projected space. In the projected space we find a linear embedding that maximizes the scatter of the dissimilar sample pairs. This corresponds to a pseudo-metric characterized by a positive semi-definite matrix in the original input space. We also kernelize the method and show that this allows it to handle cases with low-dimensional input spaces and large numbers of similarity constraints. Despite the method's simplicity, experiments on synthetic problems and on real-world image retrieval, visual object classification, gender classification and image segmentation tasks demonstrate its effectiveness, yielding significant improvements over existing distance metric learning methods.
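As a rough illustration of the two steps described in the abstract, the sketch below (not the authors' code) uses NumPy to project the data onto the null space of the similar-pair difference vectors and then keeps the directions that maximize the scatter of the dissimilar pairs. The function name learn_metric and the inputs X, similar_pairs, dissimilar_pairs, and dim are assumptions made for illustration only.

```python
import numpy as np

def learn_metric(X, similar_pairs, dissimilar_pairs, dim):
    """Illustrative sketch (assumed interface, not the paper's code).

    X                : (n_samples, n_features) data matrix
    similar_pairs    : list of (i, j) index pairs labelled "similar"
    dissimilar_pairs : list of (i, j) index pairs labelled "dissimilar"
    dim              : number of embedding directions to keep
    Returns W; distances are computed as ||W^T (x - y)||, so the learned
    pseudo-metric matrix M = W W^T is positive semi-definite.
    """
    # Step 1: difference vectors of similar pairs span the directions
    # along which similar samples must coincide after projection.
    D_sim = np.array([X[i] - X[j] for i, j in similar_pairs])

    # Orthonormal basis of the subspace orthogonal to that span
    # (the null space of D_sim), obtained from an SVD.
    _, s, Vt = np.linalg.svd(D_sim, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                      # (n_features, n_features - rank)

    # Step 2: in the projected space, maximize the scatter of the
    # dissimilar pairs, i.e. take the top eigenvectors of the
    # covariance of the projected dissimilar-pair differences.
    D_dis = np.array([X[i] - X[j] for i, j in dissimilar_pairs]) @ N
    C = D_dis.T @ D_dis
    eigvals, eigvecs = np.linalg.eigh(C)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:dim]]

    # Combine the null-space projection with the embedding.
    W = N @ top
    return W
```

With such a W, the squared distance between x and y would be (x - y)^T W W^T (x - y), which matches the positive semi-definite pseudo-metric mentioned in the abstract; the kernelized variant discussed there is not covered by this sketch.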
