Abstract

This work addresses the problem of restoring realistic rendering for augmented and mixed reality systems. Finding the light sources and restoring the correct distribution of scene brightness is one of the key steps in solving the problem of correct interaction between the virtual and real worlds. With the advent of datasets such as "Large-Scale RGB+D", it became possible to train neural networks to recover the depth map of an image, which is a key requirement for working with the environment in real time. In addition, in this work convolutional neural networks were trained on a synthesized dataset with realistic lighting. The results of the proposed methods are presented, the accuracy of restoring the positions of the light sources is estimated, and the visual difference between the image of the scene with the original light sources and the same scene with the restored ones is evaluated. The speed of the methods allows them to be used in real-time AR/VR systems.

Highlights

  • Augmented and mixed reality systems are used in many tasks. Incorrect illumination of virtual-world objects may cause discomfort in perceiving a reality in which objects of the real and virtual worlds are mixed; this limits the time a person can spend in mixed reality and restricts the practical use of mixed reality systems in various areas, for example in education or training

  • This article is devoted to convolutional neural network (CNN) methods for solving the global scientific problem of physically correct and efficient restoration of illumination conditions and optical properties of real-world objects during the synthesis of images of the virtual world

  • Works [11,12,13,14], as well as this work, are aimed at restoring lighting for augmented reality systems, but for different purposes and tasks


Introduction

Augmented and mixed reality systems are used in many tasks. Incorrect illumination of virtual-world objects may cause discomfort in perceiving a reality in which objects of the real and virtual worlds are mixed; this limits the time a person can spend in mixed reality and restricts the practical use of mixed reality systems in various areas, for example in education or training. This work is focused on determining the real power of the light flux and its position in an environment. For this, a manually synthesized sample of images with realistic optical parameters of the medium was used. The sample consists of only 260 images (221 were used for training, and the rest for the test); at the output, the neural network classifies the real optical parameters of the illumination of the medium with good accuracy. For the reconstruction of the depth of an environment, the "Large-Scale RGB+D Dataset" was used, which was obtained using a Kinect v2 and a ZED stereo camera and their disparity maps.
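The building block of the networks described above is the 2D convolution applied to the input image. As a minimal illustrative sketch (not the authors' actual architecture; the image and kernel values here are invented for illustration), a single valid-mode convolution over a grayscale image can be written as:

```python
# Sketch of a single 2D convolution, the core operation of a CNN layer.
# "Valid" mode: the kernel never extends past the image border.

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a grayscale image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A 4x4 patch of uniform brightness and a 3x3 averaging kernel:
image = [[1.0] * 4 for _ in range(4)]
kernel = [[1.0 / 9.0] * 3 for _ in range(3)]
feature_map = conv2d(image, kernel)  # 2x2 map, each value ~1.0
```

In the trained network, many such kernels are learned from the synthesized samples, and the resulting feature maps feed the classification head that outputs the illumination parameters.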
