Abstract

Mobile Augmented Reality (MAR) keeps pace with current technological advances through more intuitive interfaces, realistic graphic content and flexible development processes. Overlaying precise 3D representations exploits the high penetration of mobile devices to draw users into a world where digital data are perceived as real counterparts. The work presented in this paper integrates web-like concepts with hybrid mobile tools to visualize high-quality, complex 3D geometry over the real environment. The implementation relies on two different operational mechanisms: anchors and location-sensitive tracking. Three scenarios, for indoor and outdoor use, are developed with open-source, freely distributable SDKs, APIs and rendering engines. The JavaScript-driven prototype consolidates some of the overarching principles of AR, such as pose estimation, registration and 3D tracking, into an interactive User Interface built on the scene graph concept. The 3D overlays are shown to the end user i) on top of an image target, ii) on real-world planar surfaces and iii) at predefined points of interest (POI). Performance, rendering efficacy and responsiveness are evaluated through various testing strategies: system and trace logs, profiling and "end-to-end" tests. The final benchmarking elucidates the slow and computationally intensive procedures induced by big-data rendering, and optimization patterns are proposed to mitigate the performance impact of the non-native technologies.
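The highlights point to Viro as the underlying engine, so the following sketch illustrates how the first scenario (a 3D overlay anchored on top of an image target) might be expressed in ViroReact's JavaScript scene-graph API. This is a minimal illustration assuming the react-viro library; the asset paths, target name and physical dimensions are hypothetical placeholders, not the paper's actual resources.

```jsx
// Minimal sketch of the image-target scenario in ViroReact (react-viro).
// All asset paths, target names and dimensions are illustrative placeholders.
import React from 'react';
import {
  ViroARScene,
  ViroARSceneNavigator,
  ViroARImageMarker,
  ViroARTrackingTargets,
  ViroAmbientLight,
  Viro3DObject,
} from 'react-viro';

// Register the reference image the vision-based tracker should recognize.
// physicalWidth is the printed width of the target in meters.
ViroARTrackingTargets.createTargets({
  poster: {
    source: require('./res/poster.jpg'), // hypothetical target image
    orientation: 'Up',
    physicalWidth: 0.2,
  },
});

// Scene graph: the 3D model is a child of the image marker, so its pose
// is continuously updated relative to the detected target.
const ImageTargetScene = () => (
  <ViroARScene>
    <ViroAmbientLight color="#ffffff" />
    <ViroARImageMarker target="poster">
      <Viro3DObject
        source={require('./res/model.obj')} // hypothetical 3D asset
        type="OBJ"
        scale={[0.05, 0.05, 0.05]}
      />
    </ViroARImageMarker>
  </ViroARScene>
);

// Entry point; older react-viro releases additionally expect an apiKey prop.
export default () => (
  <ViroARSceneNavigator autofocus={true} initialScene={{ scene: ImageTargetScene }} />
);
```

Nesting the Viro3DObject inside the ViroARImageMarker reflects the scene graph concept named in the abstract: the overlay inherits the marker's continuously re-estimated pose, so registration and 3D tracking come from the parent node rather than from manual transform updates.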

Highlights

  • Immersive computing exploits machine learning, sensor technology and computer vision techniques to affect and alter intuitive perception and cognition

  • The realized prototype proves that a low-cost Augmented Reality (AR) workflow built on open-source components can serve most use cases, including vision-based and hybrid tracking methods

  • The display adapts to various situations and can be enriched with interactive features responsive to the user's needs; future research could delve into Viro's feature detection


Introduction

Immersive computing exploits machine learning, sensor technology and computer vision techniques to affect and alter intuitive perception and cognition. Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) represent scalable degrees of immersion across the real and virtual worlds. Each technology enhances the user's immediate context with digital data in a different way: where VR relies on computer-generated 3D simulations and artificial senses, AR and MR attain experiences more responsive to the physical space. Digital content is superimposed on the dynamic, ever-changing live view of the camera, facilitating knowledge dissemination and emotional engagement. When this content is interaction-driven and spatially aware, the user transitions to the state of MR and the purpose served is even more meaningful.
