Abstract

We propose a method for computing a depth map at interactive rates from a set of closely spaced calibrated video cameras and a Time-of-Flight (ToF) camera. The objective is to synthesize free-viewpoint videos in real time. All computations are performed on the graphics processing unit, leaving the CPU available for other tasks. Depth information is computed from color camera data in textured regions and from ToF data in textureless ones. The trade-off between these two sources is determined locally, based on the reliability of the depth estimates obtained from the color images. For this purpose, we use a confidence measure that takes into account the shape of the photo-consistency score as a function of depth. The final depth map is computed by minimizing a cost function. This approach offers significant time savings relative to other methods that apply denoising to the photo-consistency score maps obtained at every depth, while still achieving acceptable quality of the rendered image.
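As a rough illustration of the local trade-off described above, the sketch below fuses per-pixel depth candidates from photo-consistency scores with a ToF depth map, weighting the two sources by a confidence derived from the shape of the score curve over depth. Everything here is an assumption for illustration only: the function names, the particular confidence measure (gap between the best and second-best score), the quadratic ToF penalty, and the CPU/NumPy formulation stand in for the paper's actual GPU implementation, confidence measure, and cost function.

```python
# Hypothetical sketch of confidence-weighted depth fusion (not the paper's implementation).
import numpy as np

def photo_confidence(scores):
    """Confidence from the shape of the photo-consistency curve over depth.

    Assumed measure: a sharp, unambiguous minimum (large gap between the best
    and second-best score) yields high confidence; a flat curve, as in
    textureless regions, yields low confidence.
    scores: (D, H, W) photo-consistency score per depth hypothesis (lower = better).
    """
    sorted_scores = np.sort(scores, axis=0)
    best, second = sorted_scores[0], sorted_scores[1]
    return (second - best) / (second + 1e-6)  # roughly in [0, 1); higher = more reliable

def fuse_depth(scores, depths, tof_depth, lam=1.0):
    """Pick, per pixel, the depth minimizing a locally weighted cost that trades
    off photo-consistency against agreement with the ToF measurement."""
    conf = photo_confidence(scores)                            # (H, W)
    # Penalty for deviating from the ToF depth, one term per depth hypothesis.
    tof_cost = (depths[:, None, None] - tof_depth[None]) ** 2  # (D, H, W)
    # Textured pixels (high conf) trust photo-consistency; textureless ones trust ToF.
    cost = conf[None] * scores + lam * (1.0 - conf[None]) * tof_cost
    best_idx = np.argmin(cost, axis=0)                         # (H, W)
    return depths[best_idx]

if __name__ == "__main__":
    D, H, W = 32, 4, 4
    depths = np.linspace(0.5, 5.0, D)         # candidate depths (metres)
    scores = np.random.rand(D, H, W)          # dummy photo-consistency scores
    tof_depth = np.full((H, W), 2.0)          # dummy ToF depth map
    print(fuse_depth(scores, depths, tof_depth))
```

In this sketch the per-pixel minimization is independent, whereas the abstract's cost-function formulation may also include spatial terms; the point is only to show how a per-pixel confidence can switch smoothly between the color-based and ToF-based depth cues.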
