Abstract

This paper proposes a novel method that uses temporal reference pictures to improve the quality of view synthesis prediction. Existing view synthesis prediction schemes generate image signals only from inter-view reference pictures. However, inter-view signals suffer from several kinds of mismatch, such as illumination, color, and focus mismatch, which degrade prediction performance. The proposed method first synthesizes an initial view using conventional depth-based warping, and then uses the initial synthesized view as a set of templates to derive fine motion vectors. The initial synthesized view is then updated using the derived motion vectors and temporal reference pictures, yielding the final prediction signal. Experiments show that the proposed method improves the quality of view synthesis by about 14 dB for Ballet and about 4 dB for Breakdancers at high bitrates, and reduces the bitrate by about 2% relative to conventional view synthesis prediction.
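A minimal sketch of the two-stage prediction described above, in Python with NumPy. All names here are assumptions for illustration: `depth_warp` stands in for the paper's conventional depth-based warping, the block size, search range, and SAD matching cost are illustrative choices, and the paper's actual motion-vector derivation and update rule may differ.

```python
import numpy as np

def synthesize_with_temporal_refs(depth_warp, inter_view_refs, temporal_refs,
                                  frame_shape, block=16, search=8):
    """Hypothetical sketch: refine a depth-warped synthesized view using
    temporal reference pictures of the same view.

    depth_warp      -- callable performing conventional depth-based warping
                       from the inter-view reference pictures (assumed given)
    temporal_refs   -- previously decoded pictures of the current view
    """
    # Stage 1: initial synthesized view from inter-view references.
    synth = depth_warp(inter_view_refs)

    h, w = frame_shape
    out = synth.copy()
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # Stage 2: use the initially synthesized block as a template
            # and search the temporal references for a fine motion vector.
            template = synth[y:y + block, x:x + block].astype(np.int32)
            best, best_cost = None, np.inf
            for ref in temporal_refs:
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - block and 0 <= xx <= w - block:
                            cand = ref[yy:yy + block, xx:xx + block]
                            cost = np.abs(cand.astype(np.int32)
                                          - template).sum()  # SAD cost
                            if cost < best_cost:
                                best_cost, best = cost, cand
            # Update the block from the temporal reference, which shares
            # the current view's illumination, color, and focus, avoiding
            # the inter-view mismatches that hurt the initial synthesis.
            if best is not None:
                out[y:y + block, x:x + block] = best
    return out
```

Note the design point the abstract relies on: the motion vectors are derived by template matching against the (possibly mismatched) initial synthesis, but the pixel values that replace it come from same-view temporal references, so the output inherits the current view's signal characteristics.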
