Abstract

A new convolutional neural network is proposed for hole filling in synthesized virtual views generated by depth image-based rendering (DIBR). A context encoder in the network is trained to predict the hole region from the rendered virtual view, while an adversarial discriminator reduces errors and yields sharper, more precise results. A texture network at the end of the framework extracts the style of the image, producing a natural output that is closer to reality. Experimental results demonstrate, both subjectively and objectively, that the proposed method obtains better 3D video quality than previous methods, with the average peak signal-to-noise ratio (PSNR) increasing by 0.36 dB.
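The abstract combines three objectives: a reconstruction term on the hole region from the context encoder, an adversarial term from the discriminator, and a style term from the texture network (style is commonly measured via Gram matrices of feature maps). The following is a minimal NumPy sketch of such a combined loss; the function names, loss weights (`lam_adv`, `lam_style`), and the use of raw pixels in place of learned feature maps are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature tensor; correlations between
    channels capture texture/style independent of spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def hole_filling_loss(pred, target, mask, disc_score,
                      lam_adv=0.001, lam_style=10.0):
    """Combined objective (illustrative weights, not from the paper).
    pred, target: (C, H, W) arrays; mask: 1 inside the hole, 0 elsewhere;
    disc_score: discriminator's probability that pred is real."""
    # Reconstruction loss, restricted to the hole region
    rec = np.mean(mask * (pred - target) ** 2)
    # Generator-side adversarial loss: push disc_score toward 1
    adv = -np.log(disc_score + 1e-8)
    # Style loss: match second-order texture statistics
    style = np.mean((gram_matrix(pred) - gram_matrix(target)) ** 2)
    return rec + lam_adv * adv + lam_style * style
```

With a perfect prediction and a fooled discriminator the loss vanishes, while any hole-region error, style mismatch, or low discriminator score raises it; in a real system the style term would be computed on features of a pretrained network rather than on pixels.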
