Abstract

Despite the growing availability of self-contained augmented reality head-mounted displays (AR HMDs) based on optical see-through (OST) technology, their potential applications across highly challenging medical and industrial settings are still hampered by the complexity of the display calibration required to ensure the locational coherence between the real and virtual elements. The calibration of commercial OST displays remains an open challenge due to the inaccessibility of the user's perspective and the limited hardware information available to the end-user. State-of-the-art calibrations usually comprise both offline and online stages. The offline calibration at a generic viewpoint is crucial, as it provides a starting point for the subsequent refinements. Current offline calibration methods either rely heavily on user alignments or require complicated hardware calibrations, making the overall procedure subjective and/or tedious. To address this problem, in this work we propose two fully alignment-free calibration methods with less complicated hardware calibration procedures than state-of-the-art solutions. The first method employs an eye-replacement camera to compute the rendering camera's projection matrix based on photogrammetry techniques. The second method controls the rendered object position in a tracked 3D space to compensate for the parallax-related misalignment for a generic viewpoint. Both methods have been tested on Microsoft HoloLens 1. Quantitative results show that the average overlay misalignment is less than 4 pixels (around 1.5 mm or 9 arcmin) when the target stays within arm's reach. The achieved misalignment is much lower than that of the HoloLens default interpupillary distance (IPD)-based correction, and equivalent to, but with lower variance than, the Single Point Active Alignment Method (SPAAM)-based calibration. The two proposed methods offer strengths in complementary aspects and can be chosen according to the user's needs.
We also provide several update schemes for the two methods that can be integrated for an online viewpoint-dependent refinement of the calibration parameters. Both methods have been integrated into a Unity3D-based framework and can be directly applied to Unity-supported devices.
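The first method's core step, recovering a rendering camera's projection matrix from known 3D-2D correspondences captured by the eye-replacement camera, can be illustrated with a standard Direct Linear Transform (DLT). This is a minimal sketch of the general photogrammetry technique, not the paper's exact pipeline; function names and the correspondence data are illustrative.

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from n >= 6 3D-2D correspondences
    with the Direct Linear Transform (DLT). Each correspondence contributes
    two rows to the homogeneous linear system A p = 0."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution (up to scale) is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

In practice the correspondences would come from a tracked calibration pattern imaged by the eye-replacement camera through the OST display, and the estimated matrix would then be decomposed into the intrinsic and extrinsic parameters expected by the rendering engine.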

Highlights

  • Visual Augmented Reality (AR), which supplements the user-perceived reality with computer-generated information, is quickly becoming a powerful tool to improve the experience of visual assistance

  • Successful deployment of Optical See-Through (OST)-head-mounted displays (HMDs) across highly challenging medical and industrial settings is still hampered by the complexity of the display calibration procedures required to ensure locational coherence between the real and the virtual elements [9]

  • Camera-based reduction of the parallax-related misalignment: we extend the algorithm introduced in Section II-C to a camera-based calibration routine that can be applied to any commercial OST HMD
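The parallax compensation idea behind the second method can be sketched geometrically: when the actual viewpoint differs from the one the projection was calibrated for, a virtual point can be repositioned so that its rendered image still overlays the real target. The pinhole-eye model, plane parametrization, and function names below are simplifying assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def intersect_plane_z(origin, point, d):
    """Intersection of the ray from origin through point with the plane z = d."""
    direction = point - origin
    s = (d - origin[2]) / direction[2]
    return origin + s * direction

def compensate_parallax(real_target, eye_actual, eye_nominal, screen_z):
    """Reposition a virtual point so that, when rendered with a projection
    calibrated for eye_nominal, it overlays real_target as seen from
    eye_actual. The display is modeled as the plane z = screen_z and the
    virtual copy keeps the real target's depth."""
    # Screen point where the real target is seen from the actual eye.
    s = intersect_plane_z(eye_actual, real_target, screen_z)
    # Place the virtual point on the nominal-eye ray through that screen
    # point, at the same depth as the real target.
    return intersect_plane_z(eye_nominal, s, real_target[2])
```

When the actual and nominal viewpoints coincide, the compensated position reduces to the real target itself, which is the expected degenerate case.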



Introduction

Visual Augmented Reality (AR), which supplements the user-perceived reality with computer-generated information, is quickly becoming a powerful tool to improve the experience of visual assistance. The quality of the first-step calibration is paramount, as it provides a starting point for the subsequent viewpoint-dependent refinements. For such an offline-stage calibration, some methods rely on multiple user alignments between real and virtual features [22], [23]. These alignment-based methods can be implemented in hardware, but they are tedious (i.e., several alignments are required) and subjective (i.e., the error increases with poor-quality alignments performed by inexperienced users). Alignment-free methods such as the display-relative calibration (DRC)

