Abstract

In mobile visual sensor networks, relative pose (location and orientation) estimation is a prerequisite for a wide range of collaborative tasks. In this paper, we present a distributed, peer-to-peer algorithm for relative pose estimation in a network of mobile robots equipped with RGB-D cameras acting as a visual sensor network. Our algorithm uses depth information to estimate the relative pose of a robot when cameras mounted on different robots observe a common scene from different viewpoints. We first developed a framework based on the beam-based sensor model to eliminate the adverse effects of situations in which each sensor observes the common scene only partially. Then, to cancel the bias introduced by the beam-based sensor model, we developed a scheme that allows the algorithm to symmetrize the estimate across the two views. We conducted simulations and also implemented the algorithm on our mobile visual sensor network testbed. Both the simulation and experimental results indicate that the proposed algorithm is fast enough for real-time operation and maintains high estimation accuracy. To our knowledge, it is the first distributed relative pose estimation algorithm that uses depth information captured by multiple RGB-D cameras.
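To give a concrete flavor of relative pose estimation from two depth views, the sketch below aligns two depth-derived point clouds with the SVD-based Kabsch method and fuses the forward and inverse estimates. This is a minimal illustration under strong assumptions (corresponding 3-D points are already matched between the two views); the simple averaging shown here is only meant to echo the idea of treating both views symmetrically and is not the paper's beam-based sensor model or its symmetrization scheme. All function names are hypothetical.

```python
# Illustrative sketch only (not the authors' algorithm): symmetrized rigid
# relative-pose estimation between two depth-derived point clouds whose
# point correspondences are assumed to be known.
import numpy as np

def kabsch_pose(src, dst):
    """Estimate R, t such that dst ~ R @ src + t (src, dst: Nx3 arrays)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def symmetric_pose(cloud_a, cloud_b):
    """Estimate A->B and B->A, then fuse them to reduce one-sided bias."""
    R_ab, t_ab = kabsch_pose(cloud_a, cloud_b)
    R_ba, t_ba = kabsch_pose(cloud_b, cloud_a)
    # Invert the reverse estimate so both are expressed as A->B.
    R_inv, t_inv = R_ba.T, -R_ba.T @ t_ba
    # Average the two rotations and project back onto SO(3).
    U, _, Vt = np.linalg.svd(0.5 * (R_ab + R_inv))
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    t = 0.5 * (t_ab + t_inv)
    return R, t
```

In practice the correspondence assumption is the hard part: with real RGB-D data one would match features or run an iterative closest point scheme, and the paper's contribution lies in handling partially overlapping views and removing the resulting bias, which this toy averaging does not capture.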
