Abstract

Depth range cameras are a promising solution for the 3DTV production chain. The generation of color images with their accompanying depth value simplifies the transmission bandwidth problem in 3DTV and yields a direct input for autostereoscopic displays. Recent developments in plenoptic video-cameras make it possible to introduce 3D cameras that operate similarly to traditional cameras. The use of plenoptic cameras for 3DTV has some benefits with respect to 3D capture systems based on dual stereo cameras since there is no need for geometric and color calibration or frame synchronization. This paper presents a method for simultaneously recovering depth and all-in-focus images from a plenoptic camera in near real time using graphics processing units (GPUs). Previous methods for 3D reconstruction using plenoptic images suffered from the drawback of low spatial resolution. A method that overcomes this deficiency is developed on parallel hardware to obtain near real-time 3D reconstruction with a final spatial resolution of 800×600 pixels. This resolution is suitable as an input to some autostereoscopic displays currently on the market and shows that real-time 3DTV based on plenoptic video-cameras is technologically feasible.

Highlights

  • Since the introduction of television, a great effort has been made to improve the overall experience for viewers

  • This paper presents a novel approach for obtaining a 3D reconstruction from a scene using a plenoptic video-camera

  • Simultaneous super-resolved depth maps and all-in-focus image estimation solve the spatial resolution drawback of previous techniques based on plenoptic cameras

Summary

Introduction

Since the introduction of television, a great effort has been made to improve the overall experience for viewers. We present a super-resolution technique [18] to produce depth and all-in-focus images from plenoptic cameras. This technique is based on the super-resolution discrete focal stack transform (SDFST), which generates a focal stack that is processed to estimate 3D depth; the estimated depth is then used to obtain the all-in-focus image. The multiview stereo algorithm operates on the variance focal stack and obtains a t × t depth estimate (t = Δu·n − Δu + 1) with m − 3 different depth values, together with a t × t all-in-focus image. The main difference between hierarchical BP and general BP is that hierarchical BP works in a coarse-to-fine manner: it first performs BP at the coarsest scale and then uses the output of each coarser scale to initialize the input of the next finer scale.
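
The listing below is a minimal sketch, not the authors' GPU implementation, of the final step described above: given a super-resolved focal stack and its per-slice variance, pick a depth label per pixel and assemble the all-in-focus image. The array shapes, the function name, and the simple winner-take-all rule are illustrative assumptions; the paper's method applies a multiview stereo cost to the variance focal stack and refines it with hierarchical belief propagation on GPUs.

# Minimal sketch (illustrative, not the authors' GPU code): depth and
# all-in-focus extraction from a focal stack and its variance focal stack.
# Shapes, names, and the winner-take-all rule below are assumptions; the
# paper uses multiview stereo plus hierarchical belief propagation instead.
import numpy as np

def depth_and_all_in_focus(focal_stack, variance_stack):
    # focal_stack:    (D, H, W) super-resolved refocused slices (SDFST output)
    # variance_stack: (D, H, W) per-pixel variance of the views contributing to
    #                 each slice; low variance means the slice is in focus there
    depth = np.argmin(variance_stack, axis=0)      # (H, W) winner-take-all labels
    rows, cols = np.indices(depth.shape)
    all_in_focus = focal_stack[depth, rows, cols]  # pick each pixel's in-focus slice
    return depth, all_in_focus

# Toy usage with random data standing in for a 16-slice, 800x600 focal stack.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stack = rng.random((16, 600, 800), dtype=np.float32)
    var = rng.random((16, 600, 800), dtype=np.float32)
    d, aif = depth_and_all_in_focus(stack, var)
    print(d.shape, aif.shape)   # (600, 800) (600, 800)

In the paper, this per-pixel minimum is replaced by a regularized estimate: the cost derived from the variance focal stack is optimized with coarse-to-fine hierarchical BP, as summarized above.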

Implementation on Multi-GPU
Results
Conclusions and Future Work