Abstract

We propose a novel depth map refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic displays. High-quality depth maps are critical to generating realistic content for the virtual views, so refining the depth maps is the main challenge in this task. We propose an iterative depth refinement algorithm, consisting of error detection and error correction, to correct errors in the depth maps. Error detection targets two types of error: across-view color-depth-inconsistency errors and local color-depth-inconsistency errors. Erroneous pixels are then corrected by sampling local candidates. A trilateral filter that incorporates intensity, spatial, and temporal terms into the filter weighting is applied to enhance spatial and temporal consistency across frames, so that the virtual views can be better synthesized from the refined depth maps. To combine the two warped images, disparity-based view interpolation is introduced to alleviate translucent artifacts. Finally, a directional filter is applied to reduce aliasing around object boundaries, producing multiple high-quality virtual views between the two input views. Experiments on benchmark image and video datasets demonstrate that the proposed algorithm synthesizes virtual views with superior image quality compared with state-of-the-art view synthesis methods.
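
To illustrate the kind of trilateral weighting described above, the following minimal Python sketch combines a spatial Gaussian, a color-similarity Gaussian, and a temporal-stability Gaussian when averaging neighbouring depth samples. The function and parameter names (trilateral_refine, sigma_s, sigma_c, sigma_t) are illustrative assumptions rather than the paper's actual formulation, and the guidance images are assumed to be single-channel.

```python
import numpy as np

def trilateral_refine(depth_cur, color_cur, depth_prev, color_prev,
                      radius=3, sigma_s=2.0, sigma_c=10.0, sigma_t=10.0):
    """Hypothetical trilateral filtering of a depth map.

    Each output depth value is a weighted average of neighbouring depth
    samples from the current and previous frames. The weight combines a
    spatial Gaussian, a color-similarity Gaussian (current frame), and a
    temporal Gaussian measuring how stable the color is across frames.
    This is a sketch under assumed parameters, not the paper's method.
    """
    h, w = depth_cur.shape
    out = np.zeros_like(depth_cur, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    # spatial closeness of the neighbour
                    w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    # color similarity within the current frame
                    dc = float(color_cur[y, x]) - float(color_cur[ny, nx])
                    w_c = np.exp(-(dc * dc) / (2 * sigma_c ** 2))
                    # temporal stability of the neighbour's color across frames
                    dt = float(color_cur[ny, nx]) - float(color_prev[ny, nx])
                    w_t = np.exp(-(dt * dt) / (2 * sigma_t ** 2))
                    # depth from the current frame, plus the previous frame's
                    # depth weighted down when the color changed temporally
                    w_sc = w_s * w_c
                    acc += w_sc * depth_cur[ny, nx] + w_sc * w_t * depth_prev[ny, nx]
                    norm += w_sc * (1.0 + w_t)
            out[y, x] = acc / norm if norm > 0 else depth_cur[y, x]
    return out
```

In this sketch, lowering sigma_t makes the filter rely less on the previous frame wherever the guidance color changes between frames, which is one simple way to keep temporal smoothing from bleeding across moving object boundaries.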
