Abstract

Depth map compression is important for compact "texture-plus-depth" representation of a 3D scene, where texture and depth maps captured from multiple camera viewpoints are coded into the same format. Having received this format, the decoder can synthesize any novel intermediate view from the texture and depth maps of two neighboring captured views via depth-image-based rendering (DIBR). In this paper, we combine two previously proposed depth map compression techniques that promote sparsity in the transform domain for coding gain, graph-based transform (GBT) and transform domain sparsification (TDS), under one unified optimization framework. The key to combining GBT and TDS is to adaptively select the simplest transform per block that leads to a sparse representation. For blocks without detected prominent edges, the synthesized view's distortion sensitivity to depth map errors is low, and TDS can effectively identify a sparse depth signal in the fixed DCT domain within a large search space of good signals with small synthesized view distortion. For blocks with detected prominent edges, the synthesized view's distortion sensitivity to depth map errors is high, and the search space of good depth signals in which TDS can find sparse representations in the DCT domain is small. In this case, GBT is first performed on a graph defined by all detected edges, so that filtering across edges is avoided, resulting in a sparsity count ρ in the GBT domain. We then incrementally add the most important edge to an initial no-edge graph, each time performing TDS in the resulting GBT domain, until the same sparsity count ρ is achieved. Experimentation on two sets of multiview images showed gains of up to 0.7 dB in PSNR of synthesized view quality compared to previous techniques that employ either GBT or TDS alone.
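
To make the graph-based transform concrete, the following is a minimal illustrative sketch, not the authors' implementation: pixels of a depth block are graph nodes, 4-connected neighbors are linked unless a detected edge separates them, and the eigenvectors of the resulting graph Laplacian form the transform basis, so no filtering occurs across edges. The block size n, the function name gbt_basis, and the boolean edge-mask representation are assumptions made here for illustration.

```python
import numpy as np

def gbt_basis(edge_mask_h, edge_mask_v, n=4):
    """Sketch of a graph-based transform (GBT) basis for an n x n depth block.

    edge_mask_h[i, j] == True -> a detected edge lies between pixels (i, j) and (i, j+1)
    edge_mask_v[i, j] == True -> a detected edge lies between pixels (i, j) and (i+1, j)
    Returns an (n*n, n*n) orthonormal matrix whose columns are the GBT basis vectors.
    """
    N = n * n
    W = np.zeros((N, N))                 # adjacency matrix of the block graph
    idx = lambda i, j: i * n + j         # raster-scan index of pixel (i, j)
    for i in range(n):
        for j in range(n):
            # connect 4-neighbors only if no detected edge separates them
            if j + 1 < n and not edge_mask_h[i, j]:
                W[idx(i, j), idx(i, j + 1)] = W[idx(i, j + 1), idx(i, j)] = 1.0
            if i + 1 < n and not edge_mask_v[i, j]:
                W[idx(i, j), idx(i + 1, j)] = W[idx(i + 1, j), idx(i, j)] = 1.0
    D = np.diag(W.sum(axis=1))           # degree matrix
    L = D - W                            # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L) # ascending eigenvalues ~ increasing graph frequency
    return eigvecs

if __name__ == "__main__":
    n = 4
    no_edges = np.zeros((n, n), dtype=bool)
    basis = gbt_basis(no_edges, no_edges, n)   # with no edges, coincides (up to ordering
                                               # and degeneracies) with a 2D DCT basis
    block = np.arange(n * n, dtype=float).reshape(n, n)
    coeffs = basis.T @ block.flatten()         # GBT coefficients of the block
    print(np.round(coeffs, 3))
```

Because links crossing a detected edge are removed from the graph, a piecewise-smooth depth block concentrates its energy in few low-frequency GBT coefficients, which is the sparsity count ρ the adaptive edge-addition procedure described above then tries to match.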
