Abstract

Increases in computing power and mobile graphics capability have enabled self-contained augmented reality (AR) headsets featuring efficient head-anchored tracking solutions. Ego-motion estimation based on well-established infrared marker tracking ensures sufficient accuracy and robustness. Unfortunately, wearable visible-light stereo cameras with a short baseline, operating under uncontrolled lighting conditions, suffer from tracking failures and ambiguities in pose estimation. To improve the accuracy of optical self-tracking and its resilience to marker occlusions, degraded camera calibrations, and inconsistent lighting, in this work we propose a sensor fusion approach based on Kalman filtering that integrates optical tracking data with inertial tracking data when computing motion correlation. To measure the improvements in AR overlay accuracy, experiments are performed with a custom-made AR headset designed to support complex manual tasks performed under direct vision. Experimental results show that the proposed solution improves head-mounted display (HMD) tracking accuracy by one third and improves robustness: it still captures the orientation of the target scene when some of the markers are occluded and when the optical tracking yields unstable or ambiguous results due to the limitations of head-anchored stereo tracking cameras under uncontrollable lighting conditions.
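As a rough illustration of the kind of filtering described in the abstract, the sketch below shows a minimal linear Kalman-filter cycle that blends an inertially propagated prediction with an optical pose measurement. The state layout, motion model, noise covariances, frame rate, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal linear Kalman filter fusing an inertial (motion-model) prediction
# with an optical pose measurement. State layout and noise values are
# illustrative assumptions, not the paper's implementation.

dt = 1.0 / 60.0                                  # assumed camera frame period (60 Hz)
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])    # constant-velocity motion model
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # optical tracker observes position only
Q = 1e-4 * np.eye(6)                             # process noise (inertial drift)
R = 1e-2 * np.eye(3)                             # measurement noise (optical jitter)

x = np.zeros(6)                                  # state: [position, velocity]
P = np.eye(6)                                    # state covariance

def fuse(optical_position):
    """One predict/update cycle; returns the filtered position estimate."""
    global x, P
    # Predict: propagate the state with the motion model (inertial side).
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the optical measurement.
    y = optical_position - H @ x                 # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x[:3]

print(fuse(np.array([0.1, 0.0, 0.5])))           # example: filter one optical sample
```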

Highlights

  • The primary goal of visual augmented reality (AR) technology is to enrich the visual perception of the surrounding space by overlaying three-dimensional (3D) computer-generated elements on it in a spatially realistic manner

  • We aim to prevent substantial distortions in the patterns of horizontal and vertical disparities between the stereo camera frames presented on the displays of the headset, and we pursue a quasi-orthostereoscopic perception of the scene under video see-through (VST) view without any perspective conversion of the camera frames [36]

  • By running the AR application with the recorded video stream of the stereo cameras, we were able to collect, for each recorded frame, the pose of the target scene determined through the optical tracking algorithm described in Section 3.2 (a minimal replay sketch follows this list)
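A minimal replay sketch under stated assumptions: the file names and the `estimate_target_pose` routine are hypothetical stand-ins for the recorded stereo stream and the marker-based optical tracking algorithm of Section 3.2; only the per-frame pose-logging pattern is meant to be illustrative.

```python
import csv
import cv2
import numpy as np

def estimate_target_pose(left, right):
    """Placeholder for the marker-based optical tracking algorithm (Section 3.2)."""
    return np.eye(4)                      # would normally return the tracked 4x4 pose

# Hypothetical recorded stereo stream file names.
cap_left = cv2.VideoCapture("recorded_left.avi")
cap_right = cv2.VideoCapture("recorded_right.avi")

with open("target_poses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    frame_idx = 0
    while True:
        ok_l, left = cap_left.read()
        ok_r, right = cap_right.read()
        if not (ok_l and ok_r):
            break
        pose = estimate_target_pose(left, right)
        writer.writerow([frame_idx, *pose.flatten()])   # log one pose per recorded frame
        frame_idx += 1
```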


Summary

Introduction

The primary goal of visual augmented reality (AR) technology is to enrich the visual perception of the surrounding space by overlaying three-dimensional (3D) computer-generated elements on it in a spatially realistic manner. To satisfy the locational realism of the AR view and achieve an accurate spatial alignment between the real-world scene and the virtual elements, the process of image formation of the virtual content must be the same as that of the real-world scene [3]. The online estimation of the pose of the target scene relative to the stereo cameras (i.e., the extrinsic parameters) dictates the proper placement of the virtual objects in the AR scene. This task is typically accomplished by means of a tracking device that provides, in real time, the pose of the target scene to be augmented with respect to the real viewpoint. The real viewpoint corresponds to one or two display-anchored camera(s) in video see-through (VST) displays, and to the user’s eye(s) in optical see-through (OST) displays.
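To make the role of the extrinsic parameters concrete, the sketch below projects a virtual point expressed in the target's reference frame into pixel coordinates using a simple pinhole camera model; the intrinsic matrix and the tracked pose are illustrative values, not the paper's calibration data.

```python
import numpy as np

# Pinhole-camera sketch: the tracked pose of the target scene with respect to
# the camera (the extrinsics) places a virtual point in the image so that it
# stays aligned with the real scene. All numeric values are assumptions.

K = np.array([[800.0,   0.0, 320.0],      # intrinsics: focal lengths, principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

T_cam_target = np.eye(4)                   # tracked pose: target frame -> camera frame
T_cam_target[:3, 3] = [0.05, 0.0, 0.40]    # e.g. target 40 cm in front of the camera

def project(point_in_target_frame):
    """Project a 3D point defined in the target's frame into pixel coordinates."""
    p = T_cam_target @ np.append(point_in_target_frame, 1.0)   # apply extrinsics
    uvw = K @ p[:3]                                            # apply intrinsics
    return uvw[:2] / uvw[2]

print(project(np.array([0.0, 0.0, 0.0])))  # pixel where the target origin should appear
```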


