Abstract

It is widely understood that the performance of the nearest neighbor (NN) rule depends on: (i) the way distances are computed between different examples, and (ii) the type of feature representation used. Linear filters are often used in computer vision as a pre-processing step to extract useful feature representations. In this paper we demonstrate an equivalence between (i) and (ii) for NN tasks involving weighted Euclidean distances. Specifically, we demonstrate how the application of a bank of linear filters can be re-interpreted, in the form of a symmetric weighting matrix, as a manipulation of how distances are computed between different examples for NN classification. Further, we argue that filters fulfill the role of encoding local spatial constraints into this weighting matrix. We then demonstrate how these constraints can dramatically increase the generalization capability of canonical distance metric learning techniques in the presence of unseen illumination and viewpoint change.
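The equivalence the abstract describes can be illustrated numerically: applying a bank of linear filters is a linear map, so it can be written as a matrix F, and the squared Euclidean distance between filtered signals equals a weighted Euclidean distance on the raw signals with the symmetric weighting matrix M = FᵀF. The sketch below is illustrative only (a random matrix stands in for a concrete filter bank; all names are hypothetical), not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a bank of linear filters: filtering a d-dimensional
# signal with k linear filters is a linear map, so it can be written
# as a k x d matrix F (one row per filter/output tap).
d, k = 16, 40
F = rng.standard_normal((k, d))

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# (i) squared Euclidean distance between the filtered representations
dist_filtered = np.sum((F @ x - F @ y) ** 2)

# (ii) weighted Euclidean distance on the raw signals, using the
# symmetric positive semi-definite weighting matrix M = F^T F
M = F.T @ F
diff = x - y
dist_weighted = diff @ M @ diff

# The two distances coincide, so the filter bank can be re-interpreted
# as a choice of weighting matrix for NN classification.
assert np.isclose(dist_filtered, dist_weighted)
```

Because M = FᵀF, filters with compact spatial support yield a structured (banded) M, which is one way to read the paper's claim that filters encode local spatial constraints into the weighting matrix.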
