Abstract

Video prediction aims to forecast upcoming frames by modeling the complex spatiotemporal dynamics of given videos. However, most existing video prediction methods still perform sub-optimally in generating high-visual-quality future frames, for two reasons: 1) these methods struggle to reason about accurate future motion because they extract insufficient spatiotemporal correlations from the given frames; and 2) the state transition units in previous works are complex, which inevitably results in the loss of spatial details. When videos contain variable motion patterns (e.g., rapid movement of objects) and complex spatial information (e.g., texture details), blurring artifacts and local absence of objects may occur in the predicted frames. In this work, to predict more accurate future motion and preserve more detailed spatial information, we propose an end-to-end trainable dual-branch video prediction framework, the spatiotemporal Dynamics and Detail Aware Network (DANet). Specifically, to predict future motion, we propose a SpatioTemporal Memory (ST-Memory) that learns motion evolution in the temporal domain by transmitting deep features along a zigzag direction. To obtain adequate spatiotemporal correlations among frames, a MotionCell is constructed in the ST-Memory to facilitate the expansion of the receptive field, and spatiotemporal attention is employed in the ST-Memory to focus on the global variation of the given frames. Additionally, to preserve useful spatial details, we design a Spatial Details Memory (SD-Memory) that captures the global and local dependencies of the given frames at the pixel level. Extensive experiments on three public datasets, covering both synthetic and natural scenes, demonstrate that DANet achieves excellent video prediction performance compared with state-of-the-art methods.
In brief, DANet outperforms the state-of-the-art methods in terms of MSE by 3.1, 1.0×10⁻², and 14.3×10 on the three public benchmark datasets, respectively.
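The dual-branch idea described above can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the learned ST-Memory (motion) and SD-Memory (detail) branches are replaced here by hypothetical stand-ins, a linear extrapolation of the last inter-frame difference and the last observed frame, respectively, fused into a predicted frame.

```python
import numpy as np

def predict_next_frame(frames: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Toy dual-branch predictor over a clip of shape (T, H, W).

    Motion branch: linear extrapolation of the last inter-frame
    difference (a stand-in for the ST-Memory's learned dynamics).
    Detail branch: the last observed frame itself (a stand-in for the
    SD-Memory's pixel-level spatial details).
    """
    assert frames.ndim == 3 and frames.shape[0] >= 2
    motion = frames[-1] - frames[-2]  # simplest possible motion estimate
    detail = frames[-1]               # carry forward spatial content
    # Fuse the two branches: preserved details plus extrapolated motion.
    return detail + alpha * motion

# Example: a clip whose brightness increases by 1 per frame
# (values 0, 1, 2), so the extrapolated next frame is uniformly 3.
frames = np.arange(3, dtype=float)[:, None, None] * np.ones((3, 4, 4))
pred = predict_next_frame(frames, alpha=1.0)
```

A real dual-branch predictor would learn both branches and the fusion end-to-end; the sketch only shows why separating motion dynamics from spatial detail is a natural decomposition of the prediction task.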
