Abstract

To properly align objects between the real and virtual worlds in an augmented reality (AR) space, it is essential to continuously track the camera's exact three-dimensional position and orientation (camera pose). State-of-the-art analysis shows that traditional vision-based or inertial sensor-based solutions are inadequate when used individually. Sensor fusion for hybrid tracking has become an active research direction in the past few years, although how to perform it in a robust and principled way remains an open problem. In this paper, we develop a hybrid camera pose-tracking system that combines vision and inertial sensor technologies, and we propose the particle filter framework for the sensor fusion. Particle filters are sequential Monte Carlo methods based on a point-mass (or 'particle') representation of probability densities; they can be applied to any state-space model and generalize the traditional Kalman filtering methods. We have tested our algorithm to evaluate its performance and have compared the results obtained by the particle filter with those given by a classical extended Kalman filter. Experimental results are presented.
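As a rough illustration of the particle filter idea described above, the sketch below runs one predict-update-resample cycle of a generic bootstrap particle filter. It is a minimal, hypothetical example: it uses a scalar state with Gaussian noise models (the paper's actual state is a full six-degree-of-freedom camera pose), and the function names, noise levels, and multinomial resampling scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal bootstrap particle filter sketch (hypothetical 1-D state for
# illustration; the paper's actual state is a 6-DoF camera pose).
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         process_noise=0.1, meas_noise=0.5):
    """One predict-update-resample cycle of a bootstrap particle filter.

    particles:   (N,) array of state hypotheses
    weights:     (N,) array of normalized particle weights
    control:     inertial-derived motion increment (prediction input)
    measurement: vision-derived observation of the state
    """
    n = len(particles)
    # Predict: propagate each particle through the motion model plus noise.
    particles = particles + control + rng.normal(0.0, process_noise, n)
    # Update: reweight each particle by the Gaussian likelihood of the
    # vision measurement given that particle's state.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights (multinomial
    # resampling for brevity; systematic resampling is a common refinement).
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# Usage: track a state drifting at +1.0 per step, fusing a noisy
# "inertial" control with a noisy "vision" measurement.
N = 500
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
true_state = 0.0
for _ in range(20):
    true_state += 1.0
    control = 1.0 + rng.normal(0.0, 0.05)            # noisy inertial reading
    measurement = true_state + rng.normal(0.0, 0.5)  # noisy vision reading
    particles, weights = particle_filter_step(particles, weights,
                                              control, measurement)
estimate = particles.mean()
```

Because the posterior is represented by weighted samples rather than a single Gaussian, this scheme handles nonlinear models and multimodal densities that an extended Kalman filter linearizes away, which is the motivation for the comparison reported in the paper.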
