Abstract

The aim of multiprojector interreflection compensation is to modify input images so as to remove complex physical stray-light effects (interreflection) from a multiprojector immersive system. This is an important but often ignored problem, which can degrade the projected image. Traditional methods usually address this problem by computing a matrix inversion, and they often ignore the issue of the clarity of the generated images. In this paper, we describe a method for learning the inversion using a deep convolutional neural network (CNN), named Superresolution Compensation Net (SRCN). SRCN consists of four convolution layers that learn the interactions of global light, followed by six convolution layers and two transposed convolution layers that extract multilevel features and generate compensation images. We also used a subpixel convolution layer to increase the resolution. To make the compensation images more consistent with human visual perception, we used a perceptual loss, which compares the differences between feature maps on the VGG16 network. We implemented an immersive projector-camera display prototype (Pro-Cam) and calculated the quality index of the compensation images and the projection results. Our method achieved better results than previous methods in both objective evaluations and subjective visual perception.
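The matrix-inversion approach that the abstract contrasts against can be sketched in a few lines. Assuming a small, linearized light transport matrix (LTM) `L` that maps the projector input to the captured output, traditional compensation applies the inverse of `L` to the desired image; all names and values below are illustrative, not taken from the paper.

```python
import numpy as np

# Toy light transport matrix: each "pixel" receives its own direct light
# plus a small interreflection contribution from every other pixel.
n = 4  # number of pixels in this toy example
L = np.eye(n) + 0.05 * (np.ones((n, n)) - np.eye(n))

desired = np.array([0.2, 0.5, 0.8, 1.0])

# Traditional compensation: solve L @ compensated = desired, i.e. apply
# the inverse of the LTM to pre-distort the input image.
compensated = np.linalg.solve(L, desired)

# Projecting the compensated image through the system recovers the target.
assert np.allclose(L @ compensated, desired)
```

In a real system the LTM is enormous (every projector pixel against every camera pixel), which is why obtaining it and inverting it is costly; SRCN instead learns this inversion with a CNN.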

Highlights

  • Multiprojector systems are used in virtual reality (VR) systems, exhibitions, and tower simulators

  • When interreflection is severe, for example when there are many projectors, folded projection surfaces, or curved projection screens, the displayed image mixes the direct light from the projector with the interference light of superimposed reflections, leading to poor display quality. The contrast of the projected images is low, which disturbs user immersion and becomes an important factor hampering the popularization, application, and development of these systems

  • Our main contributions are as follows: (1) We removed multiprojector interreflection using a learning process, greatly improving multiprojector system imaging and simplifying the process of obtaining a light transport matrix (LTM) and calculating its inverse. (2) We utilized superresolution (SR) compensation to further improve the definition of the compensated images. (3) We used a perceptual loss with coefficients in addition to a pixelwise loss [17], so that the compensated images are more invariant to changes in pixel space [18, 19]. (4) We created a dataset in our projector-camera system (Pro-Cam) environment and made the dataset public
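The SR step mentioned in contribution (2) relies on the subpixel convolution layer from the abstract: a convolution first produces r×r channels per output channel, and a channel-to-space rearrangement (often called pixel shuffle) turns them into an r-times-larger image. A minimal numpy sketch of that rearrangement, with shapes chosen for illustration:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    This is the channel-to-space step of subpixel convolution: each
    group of r*r channels is interleaved into an r x r block of the
    upscaled output image.
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)     # split channels into an r x r grid
    x = x.transpose(0, 3, 1, 4, 2)   # interleave space and grid: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Four 2x2 feature maps become one 4x4 image (upscale factor r = 2).
x = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)
y = pixel_shuffle(x, 2)
assert y.shape == (1, 4, 4)
```

In SRCN-like networks this layer increases resolution without the checkerboard artifacts that strided transposed convolutions can introduce.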


Summary

Introduction

Multiprojector systems are used in virtual reality (VR) systems, exhibitions, and tower simulators. Our main contributions are as follows: (1) We removed multiprojector interreflection using a learning process, greatly improving system imaging and simplifying the process of obtaining an LTM and calculating its inverse. (2) We utilized SR compensation to further improve the definition of the compensated images. (3) We used a perceptual loss with coefficients in addition to a pixelwise loss [17], so that the compensated images are more invariant to changes in pixel space [18, 19]. (4) We created a dataset in our Pro-Cam environment and made the dataset public. The rest of this paper is organized as follows.
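The perceptual loss of contribution (3) compares feature maps rather than raw pixels. Its structure can be sketched as a coefficient-weighted sum of mean-squared differences between feature maps; in the paper these maps come from selected VGG16 layers, but the arrays, shapes, and coefficients below are placeholders, so this shows only the loss structure, not the network.

```python
import numpy as np

def perceptual_loss(feats_pred, feats_target, coeffs):
    """Coefficient-weighted sum of MSEs between corresponding feature maps."""
    return sum(c * np.mean((fp - ft) ** 2)
               for c, fp, ft in zip(coeffs, feats_pred, feats_target))

# Dummy "feature maps" from two layers, shape (channels, height, width).
rng = np.random.default_rng(0)
target = [rng.standard_normal((8, 4, 4)) for _ in range(2)]
pred = [t + 0.1 for t in target]  # prediction uniformly shifted by 0.1

loss = perceptual_loss(pred, target, coeffs=[1.0, 0.5])
```

Because the comparison happens in feature space, small pixel-level shifts that leave the features unchanged are penalized less than they would be by a pixelwise loss, which is the invariance the contribution refers to.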

Related Work
Proposed Method
Experiments
Comparison of Different Methods
Methods
Conclusions