Abstract

Single-image novel view synthesis generates target images of a scene from viewpoints that differ from that of a single input image. Pixel generation methods are one of the main approaches to novel view synthesis; previous methods typically use the input image to infer the target image in the new view. However, features from the input image in the source view alone may not be sufficient to generate a good target image, especially when only a single input image is available. In this paper, we present a deep-learning-based novel view synthesis approach that fuses features from the input image and a warped image to collaboratively generate pixels in the new view. The warped image is an intermediate output produced by projecting pixels of the input image onto the target view via an estimated depth map. Since the estimated depth and the resulting warped image are imperfect, errors are introduced when generating target pixels. To alleviate these errors and to better exchange channel information between the features of the input and warped images, we employ channel attention blocks. Experimental results on standard benchmark datasets show that our method produces excellent view synthesis results and outperforms other state-of-the-art methods.
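The abstract describes fusing input and warped features with channel attention. As a minimal sketch of how such a block could reweight channels of the fused feature map, the following NumPy code implements a squeeze-and-excitation-style channel attention; the function name, bottleneck ratio, and weight shapes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention (hypothetical sketch).

    features: (C, H, W) feature map, e.g. concatenated input/warped features.
    w1: (C//r, C), w2: (C, C//r) learned bottleneck projection weights.
    Returns the feature map reweighted per channel.
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    squeezed = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating per channel
    hidden = np.maximum(w1 @ squeezed, 0.0)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # each in (0, 1)
    # Rescale each channel by its attention weight
    return features * weights[:, None, None]

# Usage: concatenate features from the input and warped images along the
# channel axis, then let channel attention emphasize the reliable channels.
rng = np.random.default_rng(0)
feat_input = rng.standard_normal((8, 4, 4))
feat_warped = rng.standard_normal((8, 4, 4))
fused = np.concatenate([feat_input, feat_warped], axis=0)  # (16, 4, 4)
c, r = fused.shape[0], 4
w1 = rng.standard_normal((c // r, c)) * 0.1
w2 = rng.standard_normal((c, c // r)) * 0.1
out = channel_attention(fused, w1, w2)
print(out.shape)
```

Because the gate is a sigmoid, each channel is scaled by a factor in (0, 1), so unreliable channels (e.g. those corrupted by depth-estimation errors in the warped branch) can be suppressed rather than propagated.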
