Abstract

To achieve accurate augmented reality (AR) in a ubiquitous geospatial information system (UBGIS), camera poses must be estimated accurately. The consumer‐grade sensors of mobile phones do not provide sufficient accuracy. Combining artificial markers with image‐based techniques is a good solution for accurately estimating camera pose parameters, but automatically detecting such markers in very oblique images is challenging. This article combines coded targets and mobile phone sensors to improve the precision of AR visualization of underground infrastructure. To achieve this goal, we propose a new method for automatically recognizing coded targets (CTs) and identifying their centers in oblique images. We then estimate the camera pose by computing exterior orientation parameters through a space resection procedure that combines sensor readings and CT observations. Finally, we visualize underground infrastructure using AR. Comparison with the well‐known photogrammetry software Agisoft PhotoScan shows that the proposed method is robust, fast, and invariant under scaling, rotation, and variations in incidence angle. It is also more accurate than Agisoft PhotoScan, achieving sub‐pixel accuracy (about 0.692 ± 0.141 pixels). Comparing the vision‐based computation of exterior orientation parameters with the sensor‐based method demonstrates improved precision in camera pose estimation and thus better AR visualization.
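The abstract does not give implementation detail, but the vision‐based step it describes, recovering exterior orientation from detected coded‐target centers, corresponds to a standard space resection (PnP) problem. The following is a minimal sketch using OpenCV's generic PnP solver rather than the authors' actual procedure; the 3D target coordinates, the sub‐pixel image detections, and the camera matrix `K` are all hypothetical placeholders.

```python
import numpy as np
import cv2

# Hypothetical surveyed 3D coordinates of coded-target centers (world frame, meters)
object_points = np.array([
    [0.0, 0.0,  0.0],
    [1.2, 0.0,  0.0],
    [1.2, 0.9,  0.0],
    [0.0, 0.9,  0.0],
    [0.6, 0.45, 0.3],
    [0.3, 0.15, 0.1],
], dtype=np.float64)

# Corresponding sub-pixel target centers detected in the oblique image (pixels)
image_points = np.array([
    [512.3, 384.1],
    [910.7, 402.6],
    [905.2, 698.4],
    [498.9, 671.0],
    [702.5, 540.8],
    [601.1, 470.3],
], dtype=np.float64)

# Interior orientation from a prior camera calibration (focal length, principal point)
K = np.array([[1500.0,    0.0, 640.0],
              [   0.0, 1500.0, 480.0],
              [   0.0,    0.0,   1.0]])
dist = np.zeros(5)  # assume lens distortion has already been corrected

# Solve for exterior orientation: rotation and translation of the camera
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation matrix from the rotation vector
camera_position = -R.T @ tvec    # camera center expressed in world coordinates
print("Camera position (world):", camera_position.ravel())
```

In the paper's pipeline, the sensor‐based pose (from the phone's GNSS and orientation sensors) would plausibly serve as an initial guess that the CT observations then refine; `cv2.solvePnP` accepts such a guess via its `rvec`/`tvec` arguments with `useExtrinsicGuess=True`.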
