Abstract

Fast covariance calculation is required both for simultaneous localization and mapping (SLAM; e.g., in order to solve data association) and for evaluating the information-theoretic term for different candidate actions in belief space planning (BSP). In this article, we make two primary contributions. First, we develop a novel general-purpose incremental covariance update technique, which efficiently recovers specific covariance entries after any change in the probabilistic inference problem, such as the introduction of new observations/variables or relinearization. Our approach is shown to recover these entries faster than other state-of-the-art methods. Second, we present a computationally efficient approach for BSP in high-dimensional state spaces, leveraging our incremental covariance update method. State-of-the-art BSP approaches perform belief propagation for each candidate action and then evaluate an objective function that typically includes an information-theoretic term, such as entropy or information gain. Yet, candidate actions often share common parts (e.g., common trajectory segments), which are nevertheless evaluated separately for each candidate. Moreover, calculating the information-theoretic term involves a costly determinant computation of the entire information (covariance) matrix, which is O(n³) with n being the dimension of the state, or costly Schur complement operations if only the marginal posterior covariance of certain variables is of interest. Our approach, rAMDL-Tree, extends our previous BSP method rAMDL by exploiting incremental covariance calculation and reusing calculations between common parts of non-myopic candidate actions, such that these parts are evaluated only once, in contrast to existing approaches. To that end, we represent all candidate actions together in a single unified graphical model, which we introduce and call a factor-graph propagation (FGP) action tree. Each edge (arrow) of the FGP action tree represents a sub-action of one or more candidate action sequences; evaluating its information impact requires specific covariance entries of the intermediate belief represented by the vertex from which the edge originates (i.e., the tail of the arrow). Overall, our approach involves only a one-time calculation that depends on n, while evaluating the impact of each action does not depend on n. We carefully examine our approaches in simulation, considering the problem of autonomous navigation in unknown environments, where rAMDL-Tree shows superior performance compared with rAMDL, while determining the same best actions.
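
To make the computational bottleneck described above concrete, the following minimal sketch (not taken from the paper; the function names and the use of dense numpy operations are illustrative assumptions) shows the two standard operations a BSP objective typically requires per candidate action: the Gaussian entropy via a log-determinant of the full information matrix, and the marginal posterior covariance of a subset of variables via a Schur complement. Both are O(n³) in the state dimension n, which is exactly the per-candidate cost that rAMDL-Tree avoids repeating for shared sub-actions.

```python
# Illustrative sketch only (not the paper's implementation): dense numpy versions of
# the two O(n^3) operations a naive BSP objective evaluates for every candidate action.
import numpy as np

def entropy_full(Lambda):
    """Differential entropy of a Gaussian with information matrix Lambda (n x n).

    Uses log|Sigma| = -log|Lambda|; slogdet on a dense n x n matrix is O(n^3).
    """
    n = Lambda.shape[0]
    sign, logdet = np.linalg.slogdet(Lambda)
    return 0.5 * (n * np.log(2.0 * np.pi * np.e) - logdet)

def marginal_covariance(Lambda, idx):
    """Marginal posterior covariance of the variables indexed by idx.

    Computed via the Schur complement of the remaining block:
    Sigma_MM = (Lambda_MM - Lambda_MR Lambda_RR^{-1} Lambda_RM)^{-1}, again O(n^3).
    """
    idx = np.asarray(idx)
    rest = np.setdiff1d(np.arange(Lambda.shape[0]), idx)
    L_mm = Lambda[np.ix_(idx, idx)]
    L_mr = Lambda[np.ix_(idx, rest)]
    L_rr = Lambda[np.ix_(rest, rest)]
    schur = L_mm - L_mr @ np.linalg.solve(L_rr, L_mr.T)
    return np.linalg.inv(schur)
```

In a naive pipeline these routines would be invoked once per candidate action after propagating its belief; rAMDL-Tree instead evaluates each shared sub-action only once, using the covariance entries of the intermediate belief stored at the corresponding FGP action-tree vertex.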

