Abstract

In this paper, we introduce a cost-efficient framework that renders a realistic representation of a physical environment using panoramic videos. The framework includes three main modules: 1) video capturing and stitching, 2) camera trajectory estimation in the panoramic video using computer vision techniques, and 3) generation of virtual experiences using annotations integrated into the panoramic video. To quantify the framework's performance in terms of processing time, we conducted experiments in three different large environments using a 3.5 GHz desktop computer for processing. We show that the proposed framework's processing time is less than a minute per meter of the recorded path in the physical environment, which is a significant reduction compared to traditional 3D modeling approaches. While the cost savings are substantial, the framework enables users to explore the virtual environment only along the recorded paths. The proposed framework provides an opportunity to develop applications (e.g. simulation-based training) with limited budgets while still meeting the requirement of a highly accurate representation of the physical indoor environment.

Highlights

  • In this paper, we introduce a cost-efficient virtual environment generation framework using annotated panoramic videos and computer vision techniques

  • In spite of the time-consuming and costly nature of using these computer-aided design (CAD) tools, they are used in cases where game developers have sufficient funding and need to generate new environments that do not necessarily reflect an existing physical environment

  • Since consumer-level depth sensors have a limited detection range and advanced depth sensors are very expensive compared to RGB cameras, we only investigated visual Simultaneous Localization and Mapping (vSLAM) solutions implemented using RGB cameras; vSLAM has attracted more attention than other SLAM variants in recent years
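The highlight above describes estimating the camera trajectory from RGB frames alone. The paper relies on vSLAM for this; as a much-simplified illustration of the underlying idea (recovering inter-frame camera motion from image content, then chaining it into a path), the sketch below uses phase correlation to estimate an integer 2D translation between consecutive frames. This is a hypothetical stand-in, not the paper's actual pipeline: real vSLAM estimates full 6-DoF poses from feature correspondences, and the function names here are invented for the example.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation of frame b relative to frame a."""
    # The inverse FFT of the normalized cross-power spectrum peaks at the shift.
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices past the midpoint around to signed (negative) shifts.
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def accumulate_trajectory(frames):
    """Chain frame-to-frame shifts into a cumulative 2D trajectory."""
    positions = [(0, 0)]
    for prev, curr in zip(frames, frames[1:]):
        dy, dx = phase_correlation_shift(prev, curr)
        py, px = positions[-1]
        positions.append((py + dy, px + dx))
    return positions
```

For example, if each frame is a shifted copy of the previous one, `accumulate_trajectory` recovers the camera's path relative to the first frame. Accumulating relative motions this way also illustrates why drift is a central concern in SLAM: any per-frame error compounds along the path, which is what loop closure in full vSLAM systems corrects.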

Summary

INTRODUCTION

We introduce a cost-efficient framework that generates a virtual environment from annotated panoramic videos using computer vision techniques, retaining the realistic visual experience of a physical environment. While the two VE generation approaches presented above have important applications and significant advantages (e.g. they can represent nonrealistic VEs and allow users to experience the entire VE), the proposed approach is suitable for cost-constrained applications in which the VE needs to closely reflect the real physical environment. One such application is simulation-based training, which occurs in real physical environments (e.g. indoor venues).

ARCHITECTURE OVERVIEW
CAMERA TRAJECTORY ESTIMATION
PERFORMANCE ANALYSIS
GENERATION OF VIRTUAL EXPERIENCES
Findings
CONCLUSION