Abstract

Augmented reality (AR) Head-Mounted Displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Nevertheless, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome these limitations, in this work we present the results of a user study aimed at qualitatively and quantitatively validating a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures and of more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform could be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin of less than 1 mm. The results confirm that the proposed AR platform will boost the profitable adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
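For illustration, the sketch below shows one way a within-tolerance metric such as "more than 94% of the trace within 1 mm" could be computed from digitized trajectory points. In the study, accuracy was assessed with physical ad hoc templates, so this code, including its function name, tolerance parameter, and toy data, is only an assumed numerical analogue and not the authors' evaluation procedure.

```python
# Illustrative only: a hypothetical way to compute the fraction of a traced
# trajectory that stays within a 1 mm error margin of a reference curve.
import numpy as np

def fraction_within_tolerance(traced_pts, reference_pts, tol_mm=1.0):
    """Return the fraction of traced points whose distance to the closest
    reference-trajectory point is below tol_mm (all coordinates in mm)."""
    traced_pts = np.asarray(traced_pts, dtype=float)        # shape (N, 3)
    reference_pts = np.asarray(reference_pts, dtype=float)  # shape (M, 3)
    # Pairwise Euclidean distances between every traced and reference point.
    d = np.linalg.norm(traced_pts[:, None, :] - reference_pts[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # distance of each traced point to the reference curve
    return float(np.mean(nearest < tol_mm))

# Toy usage: a straight 50 mm reference segment and a slightly noisy trace.
ref = np.column_stack([np.linspace(0, 50, 200), np.zeros(200), np.zeros(200)])
trace = ref + np.random.normal(scale=0.3, size=ref.shape)
print(f"{100 * fraction_within_tolerance(trace, ref):.1f}% of points within 1 mm")
```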

Highlights

  • Visual Augmented Reality (AR) technology supplements the user’s perception of the surrounding environment by overlaying contextually relevant computer-generated elements on it so that the real world and the digital elements appear to coexist [1,2]

  • Head-Mounted Displays (HMDs) are emerging as the most efficient output medium to support complex manual tasks performed under direct vision

Introduction

Visual Augmented Reality (AR) technology supplements the user’s perception of the surrounding environment by overlaying contextually relevant computer-generated elements on it, so that the real world and the digital elements appear to coexist [1,2]. In visual AR, the locational coherence between the real and the virtual elements is paramount to supplementing the user’s perception of, and interaction with, the surrounding space [3]. Published research provides glimpses of how AR could dramatically change the way we learn and work, allowing the development of new training paradigms and efficient means to assist and guide manual tasks. AR can dramatically reduce the operator’s learning curve in performing complex assembly sequences [8,9,10,11] and improve the overall process [12].
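As a concrete, hedged illustration of the locational coherence requirement, the sketch below maps a virtual landmark defined in a phantom’s reference frame into pixel coordinates through a tracked rigid pose and a pinhole camera model. The pose, intrinsics, and landmark values are made up, and the code is not the rendering pipeline of the platform described here; it only exemplifies the registration step that keeps real and virtual elements spatially aligned.

```python
# Minimal sketch of real/virtual alignment: transform a virtual point from the
# phantom frame into the camera frame with a tracked pose, then project it.
import numpy as np

def project(point_phantom, T_cam_from_phantom, K):
    """Transform a 3D point (mm) from the phantom frame to the camera frame
    and project it to pixel coordinates with intrinsics K."""
    p_h = np.append(point_phantom, 1.0)      # homogeneous coordinates
    p_cam = (T_cam_from_phantom @ p_h)[:3]   # rigid transform: rotation + translation
    u, v, w = K @ p_cam                      # pinhole projection
    return np.array([u / w, v / w])

# Hypothetical tracked pose (camera 300 mm in front of the phantom) and intrinsics.
T = np.eye(4)
T[2, 3] = 300.0
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(project(np.array([10.0, -5.0, 0.0]), T, K))  # pixel position of the virtual landmark
```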
