Abstract

In this paper, we study the problem of how to reliably compute neighborhoods on affinity graphs. The k-nearest neighbors (kNN) method is one of the simplest and most fundamental tools, widely used in many tasks such as classification and graph construction. Previous research focused on how to efficiently compute kNN on vectorial data. However, most real-world data have no vectorial representation and are available only as affinity graphs, which may contain unreliable affinities. Since the kNN of an object o is the set of k objects with the highest affinities to o, it is easily disturbed by errors in the pairwise affinities between o and other objects, and it also cannot preserve the structure underlying the data well. To reliably analyze neighborhoods on affinity graphs, we define the k-dense neighborhood (kDN), which considers all pairwise affinities within the neighborhood, i.e., not only the affinities between o and its neighbors but also those between the neighbors themselves. For an object o, its kDN is a set kDN(o) of k objects which maximizes the sum of all pairwise affinities among the objects in the set {o} ∪ kDN(o). We analyze the properties of kDN and propose an efficient algorithm to compute it. Both theoretical analysis and experimental results on shape retrieval, semi-supervised learning, point set matching and data clustering show that kDN significantly outperforms kNN on affinity graphs, especially when many pairwise affinities are unreliable.
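The kDN objective stated above can be illustrated directly. The sketch below is a minimal brute-force implementation of that definition on a toy affinity matrix; it is not the authors' efficient algorithm, and the matrix `A`, the function name, and the example values are illustrative assumptions only.

```python
# Brute-force sketch of the k-dense neighborhood (kDN) definition:
# for object o, find the size-k set S maximizing the sum of all
# pairwise affinities among objects in {o} ∪ S.
# NOTE: exhaustive search, exponential in the number of objects;
# the paper's algorithm is far more efficient.
from itertools import combinations

import numpy as np


def k_dense_neighborhood(A, o, k):
    """Return the set S of k objects maximizing the total pairwise
    affinity within {o} ∪ S, by exhaustive enumeration."""
    n = A.shape[0]
    candidates = [i for i in range(n) if i != o]
    best_set, best_score = None, -np.inf
    for S in combinations(candidates, k):
        group = (o,) + S
        # Sum of affinities over every unordered pair in {o} ∪ S.
        score = sum(A[i, j] for i, j in combinations(group, 2))
        if score > best_score:
            best_set, best_score = set(S), score
    return best_set


if __name__ == "__main__":
    # Toy symmetric affinity matrix over 5 objects (illustrative values).
    A = np.array([
        [0.0, 0.9, 0.8, 0.1, 0.2],
        [0.9, 0.0, 0.7, 0.1, 0.1],
        [0.8, 0.7, 0.0, 0.2, 0.1],
        [0.1, 0.1, 0.2, 0.0, 0.9],
        [0.2, 0.1, 0.1, 0.9, 0.0],
    ])
    print(k_dense_neighborhood(A, o=0, k=2))  # expected: {1, 2}
```

Unlike kNN, which would pick the two neighbors with the largest individual affinities to o regardless of how they relate to each other, the kDN objective also rewards high affinities between the chosen neighbors, which is what makes it robust to single corrupted pairwise affinities.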

