Abstract

In this article, we study the problem of joint localization and target tracking using a mobile robot network. Here, a team of mobile robots equipped with onboard sensors simultaneously localize themselves and track multiple targets. We introduce a fully distributed algorithm that is applicable to generic robot motion, target process, and measurement models and is robust to time-varying sensing and communication topologies and to a changing set of blind robots (robots that do not directly sense the targets). Instead of treating localization and target tracking as two separate problems, we explicitly account for the influence of each on the other and exploit it to improve performance in a fully distributed setting. Two novel kinds of distributed estimates are derived. By employing them, each robot estimates its own pose (position and orientation) for localization and the states of the targets for tracking, using only its local information and information from its one-hop communicating neighbors, while preserving consistency. Furthermore, it is proven that, in the case of linear time-varying models, the estimation errors are bounded in the mean-square sense under very mild conditions on the sensing and communication graph and on system observability. The effectiveness of our approach is demonstrated extensively through Monte Carlo simulations and through experiments on a real-world data set. It is also shown that better performance in the robots' pose estimates is achieved when the robots' poses and the targets' states are estimated jointly.
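
As a rough illustration of the neighbor-only fusion described above, the following Python sketch shows one way a robot could combine its local estimate with estimates received from its one-hop neighbors while preserving consistency. It assumes a covariance-intersection-style fusion rule with a simple trace-based weight heuristic; this is a standard consistency-preserving technique and is only a stand-in for the distributed estimates actually derived in the paper.

    # Minimal sketch: fuse a robot's local estimate with one-hop neighbors' estimates.
    # Assumption (not from the paper): covariance intersection with trace-heuristic weights.
    import numpy as np

    def covariance_intersection(estimates):
        """Fuse (mean, covariance) pairs via covariance intersection.

        estimates: list of (x, P), x an (n,) vector, P an (n, n) covariance.
        Consistency of the fused estimate holds for any convex weights.
        """
        # Heuristic: give more weight to lower-uncertainty estimates.
        raw = np.array([1.0 / np.trace(P) for _, P in estimates])
        w = raw / raw.sum()

        info = sum(wi * np.linalg.inv(P) for wi, (_, P) in zip(w, estimates))
        vec = sum(wi * np.linalg.inv(P) @ x for wi, (x, P) in zip(w, estimates))
        P_fused = np.linalg.inv(info)
        return P_fused @ vec, P_fused

    # Example: a robot fuses its own pose estimate (x, y, heading) with two neighbors'.
    own  = (np.array([0.00,  0.00, 0.10]), np.diag([0.5, 0.5, 0.2]))
    nbr1 = (np.array([0.10, -0.05, 0.12]), np.diag([0.3, 0.4, 0.1]))
    nbr2 = (np.array([-0.05, 0.02, 0.08]), np.diag([0.6, 0.6, 0.3]))
    x_fused, P_fused = covariance_intersection([own, nbr1, nbr2])
    print(x_fused, np.trace(P_fused))

In this toy example, each robot would run such a fusion step locally at every time step, so no information beyond the one-hop neighborhood is required; the paper's algorithm additionally couples the pose and target-state estimates, which this sketch does not capture.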
