Abstract

In this paper, we focus on the problem of video prediction, i.e., future frame prediction. Most state-of-the-art techniques synthesize a single future frame at each step. As a result, multi-step prediction must feed the model's own predicted frames back as input, causing gradual performance degradation due to error accumulation in pixel space. To alleviate this issue, we propose a model that handles multi-step prediction directly. Additionally, we leverage techniques from view synthesis for future frame prediction, two problems that have so far been treated independently in the literature. Our method employs multiview camera pose prediction and depth prediction networks to project the last available frame to the desired future frames via a differentiable point cloud renderer. To synthesize moving objects, we add a refinement stage. In experiments, we show that the proposed framework outperforms state-of-the-art methods on both the KITTI and Cityscapes datasets.
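To make the projection step concrete, below is a minimal NumPy sketch of depth-based forward warping: pixels of the last frame are back-projected to a 3D point cloud using the predicted depth, transformed by the predicted relative camera pose, and re-rendered into the future view with a simple z-buffer. The function name, the nearest-pixel splatting, and the pose convention are our assumptions for illustration; the paper's pipeline uses a differentiable point cloud renderer so gradients can flow through this step, and leaves holes and moving objects to the refinement stage.

```python
import numpy as np

def warp_frame_to_future(frame, depth, K, T_future):
    """Illustrative (non-differentiable) sketch of projecting the last
    frame into a future view, assuming:
      frame:    (H, W, 3) last available image
      depth:    (H, W) predicted per-pixel depth for that frame
      K:        (3, 3) camera intrinsics
      T_future: (4, 4) predicted relative pose, source camera -> future camera
    """
    H, W = depth.shape

    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project pixels to a point cloud in the source camera frame:
    # scale each camera ray by its predicted depth.
    points = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)

    # Move the point cloud into the predicted future camera frame.
    points_h = np.vstack([points, np.ones((1, points.shape[1]))])
    points_future = (T_future @ points_h)[:3]

    # Reproject into the future image plane.
    proj = K @ points_future
    z = proj[2]
    valid = z > 1e-6
    uv = (proj[:2] / np.maximum(z, 1e-6)).round().astype(int)

    # Nearest-point-wins splatting with a z-buffer (a stand-in for the
    # differentiable point cloud renderer used in the paper).
    out = np.zeros_like(frame)
    zbuf = np.full((H, W), np.inf)
    src = frame.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        x, y = uv[0, i], uv[1, i]
        if 0 <= x < W and 0 <= y < H and z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]
            out[y, x] = src[i]

    # Disocclusion holes and dynamic objects remain; the refinement
    # stage described in the abstract is responsible for those.
    return out
```

This sketch captures only the static-scene geometry: it explains why accurate depth and pose predictions suffice to synthesize future frames of rigid scene content, and why moving objects, which violate the rigid-scene assumption, require a separate refinement stage.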
