Abstract

View synthesis prediction (VSP), an advanced form of disparity-compensated prediction, employs a synthesized picture as a reference picture for current-view texture coding. However, picture-based view synthesis imposes a huge computational burden, especially on decoders. Therefore, we propose a block-based in-loop view synthesis scheme that generates VSP samples only for blocks coded in VSP modes (called target blocks). For a target block, a window in the reference view is estimated. Pixels within the window are then warped to the current view, producing the VSP samples for the target block. The proposed method turns picture-level VSP sample generation into a macroblock-level process and significantly reduces the complexity of the VSP module while maintaining coding efficiency.
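The abstract does not detail how the reference-view window is estimated or how the warping is performed. The following minimal sketch illustrates the general idea under simplifying assumptions: the window is taken as the co-located block widened by a horizontal margin, depth is converted to disparity with an assumed focal-length-times-baseline constant, and all function and parameter names (e.g. `vsp_block`, `margin`) are hypothetical rather than the authors' actual method.

```python
import numpy as np

def disparity_from_depth(depth, f_times_baseline=100.0):
    """Convert depth to horizontal disparity (assumed 1-D parallel camera model)."""
    return f_times_baseline / np.maximum(depth, 1e-3)

def vsp_block(ref_tex, ref_depth, x0, y0, block=16, margin=8):
    """Generate VSP samples for one target block at (x0, y0) of size `block`.

    1. Estimate a window in the reference view (here: the co-located block
       widened by `margin` columns, a simplifying assumption).
    2. Forward-warp each reference pixel in the window to the current view
       using its per-pixel disparity.
    3. Keep the warped samples that land inside the target block.
    """
    h, w = ref_tex.shape
    vsp = np.zeros((block, block), dtype=ref_tex.dtype)
    filled = np.zeros((block, block), dtype=bool)

    # Step 1: window = co-located block plus a horizontal margin.
    xs = range(max(0, x0 - margin), min(w, x0 + block + margin))
    ys = range(y0, min(h, y0 + block))

    # Steps 2-3: warp window pixels and collect those hitting the block.
    for y in ys:
        for x in xs:
            d = int(round(disparity_from_depth(ref_depth[y, x])))
            xc = x - d                      # column in the current view
            if x0 <= xc < x0 + block:
                vsp[y - y0, xc - x0] = ref_tex[y, x]
                filled[y - y0, xc - x0] = True

    # Simple hole handling: propagate the last filled sample along each row.
    for r in range(block):
        last = None
        for c in range(block):
            if filled[r, c]:
                last = vsp[r, c]
            elif last is not None:
                vsp[r, c] = last
    return vsp
```

Because the synthesis runs per target block rather than over the whole picture, a decoder only pays the warping cost for blocks that actually select a VSP mode, which is the source of the complexity reduction the abstract claims.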
