Abstract

Our goal is to develop a view synthesis system that covers the entire pipeline, from capturing multi-view videos to synthesizing virtual views in real time. Depth estimation of the target scene is indispensable for view synthesis from multi-view videos. In this paper, we improve the depth estimation method developed in our previous work, which combined an active illumination technique with an efficient layer-based algorithm. Specifically, we propose adaptive space-time filtering of the cost volumes constructed for depth estimation. The filter automatically adapts its shape at each pixel, yielding higher-quality depth estimation, especially in dynamic scenes. We tested our method on a system consisting of 16 video cameras and a Digital Light Processing (DLP) projector to demonstrate its effectiveness. The improved depth estimation leads to higher-quality virtual view synthesis at a nearly real-time frame rate.
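To make the pipeline concrete, the sketch below shows a generic layer-based depth estimation loop: build a per-layer matching cost volume, smooth it over space and time, and take the per-pixel minimum. This is only an illustrative sketch, not the paper's implementation: the function names, the sum-of-absolute-differences cost, and the fixed box filter are all assumptions, whereas the paper's filter adapts its support per pixel and its cost incorporates the active illumination, neither of which is reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def build_cost_volume(ref, warped_views, num_layers):
    """Build a per-pixel matching cost volume over depth layers.

    ref          -- (H, W) reference grayscale frame
    warped_views -- per depth layer, a list of neighbor views warped to
                    the reference camera at that layer's depth, each (H, W).
    The SAD cost below is an illustrative placeholder, not the paper's cost.
    """
    H, W = ref.shape
    cost = np.empty((num_layers, H, W), dtype=np.float32)
    for d in range(num_layers):
        # Mean absolute difference against all warped neighbor views.
        diffs = [np.abs(ref - v) for v in warped_views[d]]
        cost[d] = np.mean(diffs, axis=0)
    return cost

def spacetime_filter(cost_volumes, spatial_radius=2, temporal_radius=1):
    """Smooth a stack of per-frame cost volumes over space and time.

    cost_volumes -- (T, D, H, W) array: time, depth layer, height, width.
    A fixed box filter stands in for the paper's adaptive filter, whose
    support changes per pixel (e.g., shrinking where the scene moves).
    """
    t = 2 * temporal_radius + 1
    s = 2 * spatial_radius + 1
    # Average over (time, keep depth layers separate, space, space).
    return uniform_filter(cost_volumes, size=(t, 1, s, s))

def winner_take_all(cost_volume):
    """Select the depth layer with minimum filtered cost at each pixel."""
    return np.argmin(cost_volume, axis=0)
```

The division of labor is the key point: the cost volume is cheap to build per frame, so the quality of the depth map hinges on how the costs are aggregated, which is where the adaptive space-time filter replaces the fixed box filter shown here.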
