Abstract
In this paper, we present a fusion camera system that combines one time-of-flight (ToF) depth camera with two video cameras to generate multi-view video sequences. To obtain multi-view video for 3D displays with this fusion camera system, we capture a stereo video with the pair of video cameras and a single-view depth video with the depth camera. After performing a 3D warping operation on the depth video to obtain an initial depth map at each viewpoint, we refine the map using segment-based stereo matching. To reduce mismatched depth values along object boundaries, we detect moving objects using the color difference between frames. Finally, we recompute the depth value of each pixel in every segment using stereo matching with a new cost function. Experimental results show that the proposed fusion system produces multi-view video sequences with accurate depth maps, especially along object boundaries. It is therefore better suited to generating natural 3D views for 3D displays than previous approaches.
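The 3D warping step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes pinhole camera models, known intrinsics `K_src`/`K_dst` and a relative pose `[R|t]` between the ToF camera and a target video-camera viewpoint (all names are hypothetical), and resolves occlusions with a simple z-buffer; holes left by disocclusion remain zero, which is why the paper refines the warped map afterward.

```python
import numpy as np

def warp_depth(depth, K_src, K_dst, R, t):
    """Forward-warp a depth map from the source (ToF) viewpoint to a
    destination (video camera) viewpoint by 3D reprojection.
    Illustrative sketch; K_* are 3x3 intrinsics, [R|t] the relative pose."""
    h, w = depth.shape
    # Pixel grid in homogeneous coordinates (u, v, 1)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    # Back-project each pixel to a 3D point in the source camera frame
    pts = np.linalg.inv(K_src) @ pix * depth.ravel()
    # Transform into the destination frame and project back to the image
    pts_dst = R @ pts + t[:, None]
    proj = K_dst @ pts_dst
    z = proj[2]
    u2 = np.round(proj[0] / z).astype(int)
    v2 = np.round(proj[1] / z).astype(int)
    # Splat depths with a z-buffer: draw far points first, near overwrites
    out = np.zeros_like(depth)
    valid = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (z > 0)
    order = np.argsort(-z[valid])
    out[v2[valid][order], u2[valid][order]] = z[valid][order]
    return out
```

With an identity pose (R = I, t = 0) and identical intrinsics, the warp reproduces the input depth map; with a real stereo baseline it leaves holes along object boundaries, which the segment-based stereo matching step then fills and refines.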