Abstract

The process of building a Virtual Reality (VR) environment from images involves several steps: choosing the experimental conditions (scene, camera, trajectory, weather), capturing the images, reconstructing a textured 3D model using photogrammetry software, and importing the 3D model into a game engine. This paper focuses on a postprocessing technique for the photogrammetry step, mainly for outdoor environments that cannot be reconstructed using an unmanned aerial vehicle. Since visualization applications (including VR) require a 3D model with a known vertical direction, we introduce a method to compute it. The method is based on 3D principal component analysis and a 2D Hough transform. In the experiments, we first reconstruct both man-made and natural immersive environments using a helmet-held 360° camera, then we import the 3D models into Unity in suitable coordinate systems (i.e., with a known vertical axis and a plausible scale), and finally we use VR headsets to explore the scenes as a pedestrian would. We also evaluate our method on scanner data and show that it is competitive with previous work.
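The abstract only names the ingredients of the vertical-direction method (3D PCA and a 2D Hough transform) without detailing it. As a rough, illustrative sketch of the PCA part alone, the Python snippet below estimates a candidate up-axis as the least-variance principal direction of a point cloud; the function name estimate_vertical_axis, the flat-ground assumption, and the synthetic test cloud are our own additions for illustration, not the paper's method, and the 2D Hough-transform refinement is not reproduced.

```python
import numpy as np

def estimate_vertical_axis(points: np.ndarray) -> np.ndarray:
    """Return a unit vector along the least-variance principal axis of a
    3D point cloud.  For scenes dominated by roughly flat ground, this axis
    approximates the vertical direction (up to a sign ambiguity that a
    later step would have to resolve).  Illustrative sketch only."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)      # 3x3 covariance of the cloud
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    up = eigvecs[:, 0]                        # direction of least variance
    return up / np.linalg.norm(up)

# Usage on a synthetic, slightly noisy ground plane: the estimate is ~(0, 0, 1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-10.0, 10.0, 5000),
                       rng.uniform(-10.0, 10.0, 5000),
                       rng.normal(0.0, 0.05, 5000)])
print(estimate_vertical_axis(pts))
```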
