Abstract

Multi-view depth is crucial for describing positional information in 3D space for virtual reality, free-viewpoint video, and other interaction- and remote-oriented applications. However, under lossy compression for bandwidth-limited remote applications, the quality of multi-view depth video suffers from quantization errors, producing obvious artifacts in subsequent virtual view rendering during interactions. Considerable effort is required to address these artifacts properly. In this paper, we propose a cross-view multi-lateral filtering scheme that improves the quality of compressed depth maps and videos within the framework of asymmetric multi-view video plus depth compression. In this scheme, a distorted depth map is enhanced via non-local candidates selected from the current and neighboring viewpoints at different time slots. Specifically, these candidates are clustered into a macro superpixel that captures the physical and semantic relationships among cross-view, spatial, and temporal priors. Experimental results show gains on static depth maps and dynamic depth videos in terms of the PSNR and SSIM metrics, respectively. In subjective evaluations, even object contours are recovered from compressed depth video. We also verify our method in several practical applications: artifacts along object contours are properly suppressed for interactive video, and discontinuous object surfaces are restored for 3D modeling. Our results suggest that the proposed filter outperforms state-of-the-art filters and is well suited to multi-view color plus depth-based interaction- and remote-oriented applications.
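
To make the filtering idea concrete, the sketch below shows a minimal multi-lateral depth filter in Python. It is an illustrative approximation under stated assumptions, not the paper's implementation: the candidate depth/intensity maps (drawn from the current view, a warped neighboring view, and a previous frame) are assumed to be already aligned to the current view, the macro-superpixel clustering step is omitted, and all names and parameters (`multilateral_filter`, `sigma_s`, `sigma_c`, `sigma_d`, `radius`) are hypothetical.

```python
# Minimal sketch of a multi-lateral depth filter (illustrative only, not the
# authors' code). Each pixel of a distorted depth map is refined by a weighted
# average over candidate samples gathered from several prior maps; the weights
# combine spatial distance, intensity similarity, and depth similarity, in the
# spirit of bilateral/multi-lateral filtering.
import numpy as np

def multilateral_filter(depth, gray, candidates,
                        sigma_s=3.0, sigma_c=10.0, sigma_d=8.0, radius=3):
    """Refine `depth` using candidate (depth, gray) map pairs.

    depth, gray: 2D float arrays for the current view (gray = intensity).
    candidates:  list of (cand_depth, cand_gray) pairs assumed to be aligned
                 to the current view (e.g., current, cross-view, temporal).
    """
    h, w = depth.shape
    out = depth.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for cand_depth, cand_gray in candidates:
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if not (0 <= yy < h and 0 <= xx < w):
                            continue
                        # Spatial, photometric, and depth-range weight terms.
                        w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        w_c = np.exp(-(gray[y, x] - cand_gray[yy, xx]) ** 2
                                     / (2 * sigma_c ** 2))
                        w_d = np.exp(-(depth[y, x] - cand_depth[yy, xx]) ** 2
                                     / (2 * sigma_d ** 2))
                        wgt = w_s * w_c * w_d
                        num += wgt * cand_depth[yy, xx]
                        den += wgt
            out[y, x] = num / den if den > 0 else depth[y, x]
    return out

# Example usage with synthetic data; in practice the candidate list would hold
# the current map plus warped cross-view and temporal maps.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = rng.integers(0, 255, (32, 32)).astype(np.float64)
    gray = rng.integers(0, 255, (32, 32)).astype(np.float64)
    refined = multilateral_filter(depth, gray, [(depth, gray)])
```

A full realization of the scheme would additionally restrict the candidate search to the macro superpixel containing the target pixel, so that only physically and semantically related samples contribute to the weighted average.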
