Abstract
New data formats that include 2D video and corresponding depth maps enable new video applications in which virtual views can be rendered, such as 3DTV and free-viewpoint video (FVV). Unlike video frames, depth maps typically consist of homogeneous, textureless areas separated by sharp edges that represent depth discontinuities, such as those between foreground and background. Conventional video coding techniques based on transforms followed by quantization typically produce large artifacts along such sharp edges. To suppress these coding artifacts while preserving edges, we propose in this paper a novel filtering method for depth coding, the joint trilateral filter. The main contribution of the proposed filter design is the use of edge information from the collocated video frame as well as from the depth map itself. The filtering weights are determined by three factors: a domain (spatial) filter that measures the proximity of pixel positions, and two range filters. One range filter accounts for the similarity among depth samples, while the other considers the similarity among the collocated pixels in the video frame. When the deblocking filter in H.264/AVC is replaced with the proposed trilateral filter, simulation results demonstrate up to 0.8 dB gain in rendering quality at a given bitrate for the depth signal.
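For illustration only, the sketch below shows one plausible form of the three-factor weighting described above, written in Python. The Gaussian kernel shapes, the parameter values (radius, sigma_s, sigma_d, sigma_v), and the function name joint_trilateral_filter are assumptions made for exposition; they are not the exact kernels or the H.264/AVC in-loop integration defined in the paper.

import numpy as np

def joint_trilateral_filter(depth, video, radius=2,
                            sigma_s=2.0, sigma_d=10.0, sigma_v=10.0):
    # depth, video: 2D arrays of the same shape (depth map and collocated
    # video luma). Sigma values are illustrative, not taken from the paper.
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dpatch = depth[y0:y1, x0:x1].astype(np.float64)
            vpatch = video[y0:y1, x0:x1].astype(np.float64)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Domain (spatial) kernel: proximity of pixel positions.
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Range kernel on depth samples.
            w_d = np.exp(-((dpatch - depth[y, x]) ** 2) / (2 * sigma_d ** 2))
            # Range kernel on collocated video pixels.
            w_v = np.exp(-((vpatch - video[y, x]) ** 2) / (2 * sigma_v ** 2))
            weights = w_s * w_d * w_v
            out[y, x] = np.sum(weights * dpatch) / np.sum(weights)
    return out

Because the three kernels are combined multiplicatively, a neighboring depth sample only receives a large weight if it is spatially close, similar in depth value, and backed by a similar collocated video pixel, which is what allows smoothing of homogeneous regions while preserving sharp edges.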