Abstract

With the advancement of display technologies, virtual-viewpoint video must be synthesized from adjacent viewpoints to provide an immersive viewing experience of a scene. View synthesis techniques suffer from poor rendering quality due to holes created by occlusion during the warping process. Currently, spatial- and temporal-correlation techniques are used to improve the quality of the synthesized view. However, spatial-correlation techniques, e.g., inpainting and inverse mapping (IM), cannot fill holes efficiently because spatial correlation is low at the edges between foreground and background pixels. On the other hand, exploiting the temporal correlation among already-synthesized frames, learned through Gaussian mixture modelling (GMM), can fill occluded areas efficiently. However, no frames are available for GMM learning when the user switches views instantly. To address these issues, the proposed view synthesis technique applies GMM to the adjacent-viewpoint videos. The number of GMM models is then used to refine the pixel intensities of the synthesized view through a weighting factor between the pixel intensities of the GMM models and those of the warped images. This provides better pixel correspondence, improving PSNR by 0.47–0.58 dB compared to the IM technique.
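As a rough illustration of the refinement step described above, the sketch below learns a per-pixel Gaussian mixture model from adjacent-viewpoint frames using OpenCV's MOG2 background subtractor and blends its background estimate with the warped image. The fixed blending weight `alpha`, the hole convention (zero-valued pixels), and the function name `refine_synthesized_view` are all assumptions for illustration; the abstract does not specify how the weighting factor is computed from the number of GMM models.

```python
import cv2
import numpy as np

def refine_synthesized_view(adjacent_frames, warped, alpha=0.5):
    """Hypothetical sketch: blend a GMM background estimate with a warped view.

    adjacent_frames: list of BGR frames from the adjacent-viewpoint video
    warped: warped (synthesized) BGR frame; holes assumed to be zero-valued
    alpha: assumed weighting factor between GMM and warped intensities
    """
    # Learn a per-pixel Gaussian mixture model from the adjacent-view frames
    # (OpenCV's MOG2 implements Stauffer-Grimson-style GMM background modelling).
    gmm = cv2.createBackgroundSubtractorMOG2(history=len(adjacent_frames))
    for frame in adjacent_frames:
        gmm.apply(frame)

    # Most probable per-pixel background intensity, as modelled by the GMM.
    gmm_background = gmm.getBackgroundImage()

    # Hole mask: pixels the warping process left empty (assumed zero-valued).
    holes = np.all(warped == 0, axis=2)

    # Blend GMM and warped intensities with the weighting factor...
    refined = (alpha * gmm_background.astype(np.float32)
               + (1.0 - alpha) * warped.astype(np.float32))
    # ...but fill occluded holes entirely from the GMM background estimate.
    refined[holes] = gmm_background[holes]
    return np.clip(refined, 0, 255).astype(np.uint8)
```

In the paper, the weighting presumably varies per pixel with the number of Gaussian models the GMM maintains there; a single fixed `alpha` is used here only to keep the sketch self-contained.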
