Abstract
Most research on novel viewpoint synthesis relies on multiview input images. In this paper, we address a more challenging and ill-posed problem: synthesizing surrounding novel viewpoints from a single image. To achieve this goal, we design a full-resolution network that extracts fine-scale image features, which helps prevent blurry artifacts. We also incorporate a pretrained relative depth estimation network, so that three-dimensional information can be used to infer the flow field between the input image and the target image. Since the depth network is trained on the depth ordering between pairs of objects, large-scale image features are also brought into our system. Finally, a synthesis layer not only warps the observed pixels to their desired positions but also hallucinates the missing pixels from other recorded pixels. Experiments show that our technique synthesizes reasonable novel viewpoints surrounding the input in cases where other state-of-the-art techniques fail.
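The synthesis layer described above uses a flow field to map each target pixel back to a source location in the input image. A minimal sketch of this backward-warping step is given below; the function name, the NumPy implementation, and the border-clamping fallback for unobserved pixels are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp `image` by a per-pixel flow field (illustrative sketch).

    image: (H, W) array of source pixel values.
    flow:  (H, W, 2) array; flow[y, x] = (dy, dx) points from each
           target pixel back to its source location in `image`.
    Out-of-bounds samples are clamped to the nearest border pixel,
    a crude stand-in for hallucinating missing pixels.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Round to nearest source pixel and clamp to the image bounds.
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

# Example: a uniform flow of dx = 1 makes every target pixel sample
# from one column to its right in the source image.
img = np.arange(9.0).reshape(3, 3)
flow = np.zeros((3, 3, 2))
flow[..., 1] = 1.0
print(warp_with_flow(img, flow))
```

In a learned system the flow field would be predicted by the network from the depth estimate and the desired viewpoint change, and a differentiable bilinear sampler would replace the nearest-neighbor lookup used here for simplicity.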