Abstract

The problem of localizing a set of nodes from relative pairwise measurements appears in different fields such as computer vision, sensor networks, and robotics. In practice, the measurements may be contaminated by noise and outliers that lead to erroneous localization. Previous work has empirically shown that robust algorithms can, in some situations, almost completely cancel the effect of outliers. However, there is a theoretical gap in answering the following question: under what conditions on the number, magnitude, and arrangement of the outlier measurements can we guarantee that a robust algorithm will recover the ground truth locations from the relative measurements alone? We denote this concept as <i>verifiability</i>, and answer the question for the case of an <inline-formula><tex-math notation="LaTeX">$\ell _{1}$</tex-math></inline-formula>-norm robust optimization formulation, with translation measurements that are affected only by large-magnitude outliers and no small-magnitude noise. We prove that verifiability depends only on the topology of the graph, the location of the edges affected by the outliers, and the sign of the outliers, while it is independent of the (<i>a priori</i> unknown) true location of the nodes and the magnitude of the outliers. We present an algorithm based on the dual simplex algorithm that checks the verifiability of a problem and, if the problem is not verifiable, completely characterizes the space of equivalent solutions that are consistent with the given pairwise measurements. Our theory and algorithms can be used to compute the <i>a priori</i> probability of recovering a solution congruent or equivalent to the ground truth, without having access to the true locations.
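As a minimal, hypothetical sketch of why an <inline-formula><tex-math notation="LaTeX">$\ell _{1}$</tex-math></inline-formula>-norm formulation can cancel large-magnitude outliers exactly, consider the problem reduced to a single unknown 1-D node offset measured several times (this toy instance and its numbers are our illustration, not the paper's algorithm): the <inline-formula><tex-math notation="LaTeX">$\ell _{1}$</tex-math></inline-formula> minimizer is the median of the measurements, which ignores a minority of outliers regardless of their magnitude, whereas the least-squares mean is corrupted by them.

```python
# Toy illustration (hypothetical numbers): recover a 1-D offset x from
# repeated relative measurements t_k = x_true + outlier_k.

def l1_estimate(measurements):
    """Minimizer of sum_k |x - t_k|: the median of the measurements."""
    s = sorted(measurements)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def l2_estimate(measurements):
    """Minimizer of sum_k (x - t_k)^2: the mean of the measurements."""
    return sum(measurements) / len(measurements)

x_true = 1.0
t = [1.0, 1.0, 100.0]   # two clean measurements, one gross outlier

print(l1_estimate(t))   # 1.0  -> ground truth recovered exactly
print(l2_estimate(t))   # 34.0 -> corrupted by the outlier
```

Note that replacing the outlier value 100.0 with any larger number leaves the median estimate unchanged, mirroring the abstract's claim that verifiability is independent of the outlier magnitude; what matters in the full graph setting is the topology and which edges the outliers occupy.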
