Abstract

Augmented reality (AR) applications suffer from the poor accuracy of the azimuth angle provided by mobile devices. The fusion of the digital magnetic compass (DMC), accelerometer, and gyroscope gives the translation and rotation of the observer in 3D space. However, the precision is not always sufficient, since the DMC is prone to interference near metal objects or electric currents. In a mountainous scene, the silhouette of the ridges separates the sky from the terrain and forms the skyline, or horizon line. This salient feature can be used for orientation: with the camera of the device and a digital elevation model (DEM), the correct azimuth angle can be determined. This study proposes an effective method to adjust the azimuth by identifying the skyline in an image and matching it against the skyline derived from the DEM. The approach requires no manual interaction, and the algorithm has been validated in a real-world environment.

Highlights

  • Humans can interpret the environment by processing information that is contained in visible light radiated, reflected, or transmitted by the surrounding objects

  • The first part of this section demonstrates the results on sample images

  • This study proposed an automatic, computer vision-based method for improving the azimuth measured by the unreliable digital magnetic compass (DMC) sensor in mountainous terrain


Introduction

Humans can interpret the environment by processing information contained in visible light radiated, reflected, or transmitted by surrounding objects. Computer vision algorithms try to perceive images coming from sensors. Visual localization is a six-dimensional problem: finding the position (longitude, latitude, elevation) and orientation (pan, tilt, roll) from a single geotagged photo. Visual orientation from an image requires that the position of the observer is at least roughly known, the photo is taken not far from the ground, and the camera is approximately horizontal. The problem can then be reduced to a one-dimensional instance in which only the pan angle, or in other words the azimuth, needs to be determined. Computer vision can improve the precision of the sensors by capturing visual clues whose real-world positions are accurately known. The orientation of the observer can thus be improved, which is critical in AR applications.
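The one-dimensional reduction described above can be sketched as a search over candidate azimuths: slide the skyline profile extracted from the photo along a 360-degree skyline rendered from the DEM and keep the best-matching offset. The following is a minimal sketch, not the paper's actual algorithm; both skylines are assumed to be 1D elevation-angle profiles sampled once per degree, and the mismatch measure (mean squared error) is an illustrative choice.

```python
import numpy as np

def estimate_azimuth(dem_skyline, image_skyline, fov_deg):
    """Return the azimuth (degrees) whose DEM window best matches the image skyline.

    dem_skyline   -- 360-sample panoramic skyline rendered from the DEM (one per degree)
    image_skyline -- skyline profile extracted from the photo
    fov_deg       -- horizontal field of view of the camera in degrees
    """
    # Resample the image skyline to one sample per degree of field of view.
    n = int(round(fov_deg))
    x_old = np.linspace(0.0, 1.0, len(image_skyline))
    x_new = np.linspace(0.0, 1.0, n)
    query = np.interp(x_new, x_old, image_skyline)

    best_az, best_err = 0, np.inf
    for az in range(360):                        # try every candidate azimuth
        idx = (az + np.arange(n)) % 360          # wrap around the panorama
        window = dem_skyline[idx]
        err = np.mean((window - query) ** 2)     # mean squared mismatch
        if err < best_err:
            best_az, best_err = az, err
    return best_az
```

A quick synthetic check: render an artificial panorama, cut out a 60-degree window at a known azimuth, and confirm the search recovers it. In practice the DEM skyline would be rendered from the observer's GPS position, and the image skyline extracted by an edge- or segmentation-based detector.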

