Abstract
We present a new visual-inertial tracking device for augmented and virtual reality applications. The paper addresses two fundamental issues of such systems. The first concerns the definition and modelling of the sensor fusion. Much work has been done in this area, and several models for exploiting the data of gyroscopes and linear accelerometers have been proposed. However, the respective advantages of each model, and in particular the benefit of integrating accelerometer data into the filter, remain unclear. The paper therefore evaluates different models, paying special attention to the effect of accelerometer use on tracking performance. The second contribution is an image-processing approach that requires no special landmarks but instead uses natural features. Our solution relies on a 3D model of the scene, which makes it possible to predict the appearance of the features by rendering the model with the prediction data of the sensor fusion filter. The feature localisation is robust and accurate, mainly because local lighting is also estimated. The final system is evaluated with the help of ground truth and real data. High stability and accuracy are demonstrated, even in large environments.
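To make the sensor fusion step concrete, the following is a minimal sketch of the kind of IMU-driven prediction step such a filter performs between camera measurements. It is not the paper's actual filter: the function name `predict_state`, the state layout (position, velocity, rotation matrix), and the first-order integration scheme are all assumptions chosen for illustration.

```python
import numpy as np

def predict_state(p, v, R, omega, a_meas, dt):
    """One hypothetical strapdown prediction step of a visual-inertial filter.

    p, v   : position and velocity in the world frame (3-vectors)
    R      : orientation as a 3x3 rotation matrix (body -> world)
    omega  : gyroscope reading in rad/s (body frame)
    a_meas : accelerometer reading in m/s^2 (specific force, body frame)
    dt     : integration interval in seconds
    """
    g = np.array([0.0, 0.0, -9.81])  # gravity in the world frame (z up)

    # Integrate angular rate: R <- R * exp([omega]_x * dt),
    # here with a first-order approximation of the matrix exponential.
    omega_skew = np.array([[0.0,      -omega[2],  omega[1]],
                           [omega[2],  0.0,      -omega[0]],
                           [-omega[1], omega[0],  0.0]])
    R_new = R @ (np.eye(3) + omega_skew * dt)

    # Rotate the measured specific force into the world frame and
    # add gravity back to recover the true linear acceleration.
    a_world = R @ a_meas + g

    # Integrate translation assuming constant acceleration over dt.
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    return p_new, v_new, R_new
```

The modelling question the abstract raises maps directly onto this sketch: a gyro-only filter would keep the rotational update but drop the accelerometer-driven translation update, and the paper's evaluation asks how much the latter actually contributes to tracking performance.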