Abstract
To properly align objects in the real and virtual worlds in an augmented reality (AR) space, it is essential to continuously track the camera's exact three-dimensional position and orientation (camera pose). State-of-the-art analysis shows that traditional vision-based or inertial sensor-based solutions are not adequate when used individually. Sensor fusion for hybrid tracking has become an active research direction during the past few years, although how to perform it in a robust and principled way remains an open problem. In this paper, we develop a hybrid camera pose-tracking system that combines vision and inertial sensor technologies, and we propose the particle filter framework for the sensor fusion. Particle filters are sequential Monte-Carlo methods based upon a point-mass (or 'particle') representation of probability densities; they can be applied to any state-space model and generalize the traditional Kalman filtering methods. We have tested our algorithm to evaluate its performance and have compared the results obtained by the particle filter with those given by a classical extended Kalman filter. Experimental results are presented.
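The abstract does not specify the paper's motion model or measurement equations, so the following is only a minimal sketch of the generic bootstrap particle filter (sequential importance resampling) it refers to, applied to a hypothetical one-dimensional random-walk state with Gaussian measurement noise; the function name and all noise parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(measurements, n_particles=1000,
                    process_noise=0.1, meas_noise=0.5):
    """Bootstrap particle filter for an assumed 1-D random-walk state.

    Each step: predict particles through the process model, weight
    them by the measurement likelihood, and resample proportionally
    to the weights. Returns the posterior-mean estimate per step.
    """
    # Initialise the particle cloud from a broad Gaussian prior.
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in measurements:
        # Predict: propagate each particle through the process model.
        particles = particles + rng.normal(0.0, process_noise, n_particles)
        # Update: weight particles by the Gaussian likelihood p(z | x).
        weights = np.exp(-0.5 * ((z - particles) / meas_noise) ** 2)
        weights /= weights.sum()
        # Point estimate: weighted posterior mean of the particle cloud.
        estimates.append(float(np.dot(weights, particles)))
        # Resample: draw particles in proportion to their weights,
        # which implicitly resets all weights to 1/N.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return estimates
```

Unlike the extended Kalman filter, which linearizes the model and keeps only a Gaussian mean and covariance, the particle cloud above can represent arbitrary (e.g. multimodal) posteriors, which is why the abstract describes particle filters as generalizing Kalman filtering.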