Abstract

In this paper, we present a fusion camera system combining one time-of-flight depth camera and two video cameras to generate multi-view video sequences. To obtain multi-view video for 3D displays with this fusion camera system, we capture a stereo video with the pair of video cameras and a single-view depth video with the depth camera. After performing a 3D warping operation on the depth video to obtain an initial depth map at each viewpoint, we refine the map using segment-based stereo matching. To reduce mismatched depth values along object boundaries, we detect moving objects using the color difference between frames. Finally, we recompute the depth value of each pixel in every segment using stereo matching with a new cost function. Experimental results show that the proposed fusion system produces multi-view video sequences with accurate depth maps, especially along object boundaries. It is therefore better suited to generating natural 3D views for 3D displays than previous methods.
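The 3D warping step mentioned above back-projects each depth-camera pixel into 3D space and reprojects it into a target video-camera view. The sketch below illustrates this idea under assumptions not stated in the abstract: a pinhole camera model with intrinsics `K_src`/`K_dst`, an extrinsic rotation `R` and translation `t` mapping source-camera to target-camera coordinates, and a simple z-buffer to resolve collisions; it is an illustrative approximation, not the paper's implementation.

```python
import numpy as np

def warp_depth(depth, K_src, K_dst, R, t):
    """Forward-warp a depth map from the source (depth-camera) view to a
    target (video-camera) view via 3D warping.

    Assumed conventions (hypothetical, not from the paper): pinhole model,
    depth in metres, (R, t) transforms source-camera coordinates into
    target-camera coordinates; zero means "no depth" in the output.
    """
    h, w = depth.shape
    warped = np.zeros_like(depth)

    # Pixel grid in homogeneous coordinates, shaped 3 x N.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project every pixel to a 3D point in the source camera frame.
    pts = np.linalg.inv(K_src) @ pix * depth.reshape(1, -1)

    # Transform into the target camera frame and reproject.
    pts_dst = R @ pts + t.reshape(3, 1)
    proj = K_dst @ pts_dst
    z = proj[2]
    valid = z > 1e-6  # keep only points in front of the target camera
    uu = np.round(proj[0, valid] / z[valid]).astype(int)
    vv = np.round(proj[1, valid] / z[valid]).astype(int)
    zz = z[valid]
    inside = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)

    # Z-buffer: when several source pixels land on one target pixel,
    # keep the nearest (smallest) depth.
    for x, y, d in zip(uu[inside], vv[inside], zz[inside]):
        if warped[y, x] == 0 or d < warped[y, x]:
            warped[y, x] = d
    return warped
```

Forward warping like this leaves holes at disoccluded regions, which is why the abstract's subsequent refinement with segment-based stereo matching is needed to fill and correct the initial per-view depth maps.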
