Abstract

Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development, in turn, has prompted efforts to enhance mechanisms for registering virtual objects in real-world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to accurately describe the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information of indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we extract the interior layout using a fully convolutional network (FCN) to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely into the real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real-world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.
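
The pipeline outlined above (reference-image key points, FCN-derived layout coordinates, camera pose estimation, and fusion) can be illustrated with a short sketch. The snippet below is a minimal illustration using OpenCV as a stand-in implementation: the reference image name, camera intrinsics, and layout coordinates are placeholder assumptions, and the paper's layout-extraction network is not reproduced here.

    # Minimal sketch of the tracking-and-fusion loop described in the abstract,
    # using OpenCV as a stand-in. File names, intrinsics, and layout coordinates
    # are illustrative assumptions, not values from the paper.
    import cv2
    import numpy as np

    # Offline step: detect key points in the reference (tracking target) image.
    reference = cv2.imread("reference_room.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image
    orb = cv2.ORB_create(nfeatures=1000)
    ref_kp, ref_desc = orb.detectAndCompute(reference, None)
    h, w = reference.shape

    # Layout coordinate points for the tracking target: here, the 3D scene
    # coordinates (in metres) of the reference image's four corners. In the
    # paper these come from the layout-extraction step.
    layout_3d = np.array([[0.0, 0.0, 0.0],
                          [2.0, 0.0, 0.0],
                          [2.0, 1.5, 0.0],
                          [0.0, 1.5, 0.0]], dtype=np.float32)

    K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def estimate_pose(frame_gray):
        """Recognize the tracking target in a video frame and estimate camera pose."""
        kp, desc = orb.detectAndCompute(frame_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(ref_desc, desc), key=lambda m: m.distance)
        if len(matches) < 10:
            return None  # target not recognized in this frame
        src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        # Project the reference corners into the frame, then solve 2D-3D PnP
        # against their known layout coordinates.
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        corners_2d = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
        ok, rvec, tvec = cv2.solvePnP(layout_3d, corners_2d, K, None)
        return (rvec, tvec) if ok else None

    def fuse_virtual_object(frame, pose, object_3d):
        """Overlay a virtual object's 3D vertices at the correct image positions."""
        rvec, tvec = pose
        pts_2d, _ = cv2.projectPoints(object_3d, rvec, tvec, K, None)
        for x, y in pts_2d.reshape(-1, 2):
            cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)
        return frame

In this sketch the feature matcher stands in for the paper's scene recognition component; once a pose is available, any virtual geometry expressed in the layout's coordinate frame can be projected into the frame and drawn in place.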

Highlights

  • As GIS technologies are adopted in a growing number of application scenarios, more attention has focused on the display and visualization of spatial information

  • Demands for flexibility and realism in GIS visualization are growing as the volume and complexity of spatial information expand; AR-GIS is a response to these challenges

  • To achieve realistic visual effects and coherent rendering, camera pose tracking techniques are necessary for an accurate understanding of spatial relationships in AR-GIS

Introduction

As GIS technologies are adopted in a growing number of application scenarios, more attention has focused on the display and visualization of spatial information. To achieve realistic visual effects and coherent rendering, camera pose tracking techniques are necessary for an accurate understanding of spatial relationships in AR-GIS. Precise tracking of the camera within an augmented environment is required to achieve proper alignment of the virtual objects to their real-world counterparts and create a rich user experience [1]. AR-GIS renderings require that objects appear in target scenes at exact positions consistent with the real world; for example, desks must rest on the floor and pictures must hang on walls. The goal of this paper is to enable realistic augmented user experiences in 3D scenes through 3D scene understanding and indoor scene tracking that integrate virtual content properly and address the limitations of AR-GIS visualization.
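
To make the alignment requirement concrete, the short sketch below projects the footprint of a virtual desk, constrained to the floor plane, into image space using an estimated camera pose. It is a minimal sketch: the intrinsics, pose values, and desk dimensions are illustrative assumptions, with the pose assumed to come from a tracker such as the one sketched under the abstract.

    # A minimal sketch, assuming a camera pose (rvec, tvec) and intrinsics K
    # supplied by an upstream tracker. The pose values and desk dimensions
    # are placeholders for illustration.
    import cv2
    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    rvec = np.zeros((3, 1))                   # placeholder camera rotation
    tvec = np.array([[0.0], [1.5], [4.0]])    # placeholder camera translation

    # Footprint of a 1.2 m x 0.6 m virtual desk constrained to the floor plane
    # (y = 0 in scene coordinates), so the overlay respects the scene's geometry.
    desk_footprint = np.array([[0.0, 0.0, 2.0],
                               [1.2, 0.0, 2.0],
                               [1.2, 0.0, 2.6],
                               [0.0, 0.0, 2.6]], dtype=np.float32)

    pts_2d, _ = cv2.projectPoints(desk_footprint, rvec, tvec, K, None)
    print(pts_2d.reshape(-1, 2))  # pixel positions at which to render the desk

Because the desk's vertices are fixed to the floor plane in scene coordinates, re-projecting them with each new camera pose keeps the overlay attached to the floor as the viewpoint changes, which is exactly the consistency the paper targets.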
