Abstract

Future frame prediction in video is one of the most important problems in computer vision and is useful for a range of practical applications, such as intention prediction and video anomaly detection. However, this task is challenging because of the complex and dynamic evolution of the scene. The difficulty of video frame prediction lies in modeling the inherent spatio-temporal correlation between frames and in designing an adaptive, flexible framework that copes with large motion changes or appearance variations. In this paper, we construct a deep multi-branch mask network (DMMNet) that adaptively fuses the advantages of optical flow warping and RGB pixel synthesis, the two common kinds of approaches to this task. In DMMNet, we add a mask layer to each branch to adaptively adjust the magnitude range of the estimated optical flow and the weights of the frames predicted by optical flow warping and RGB pixel synthesis, respectively. In other words, we provide a more flexible masking network for fusing motion and appearance in video frame prediction. Extensive experiments on the Caltech Pedestrian and UCF101 datasets show that the proposed model obtains favorable video frame prediction performance compared with state-of-the-art methods. In addition, we apply our model to the video anomaly detection problem, and its superiority is verified by experiments on the UCSD dataset.
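To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of how a per-pixel mask can blend a flow-warped previous frame with a directly synthesized RGB frame; the function names (backward_warp, fuse_prediction) and the nearest-neighbor warping are illustrative assumptions, with random arrays standing in for the outputs of the flow, synthesis, and mask branches.

```python
import numpy as np

def backward_warp(frame, flow):
    """Warp `frame` (H, W, 3) using a dense flow field `flow` (H, W, 2),
    sampling source pixels with nearest-neighbor lookup (illustrative only)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample source coordinates displaced by the flow; clip to stay in bounds.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def fuse_prediction(prev_frame, flow, synthesized, mask):
    """Blend the warped and synthesized predictions with a per-pixel mask in [0, 1]."""
    warped = backward_warp(prev_frame, flow)
    m = mask[..., None]                       # broadcast mask over RGB channels
    return m * warped + (1.0 - m) * synthesized

# Toy usage: random tensors stand in for the network's branch outputs.
h, w = 64, 64
prev_frame  = np.random.rand(h, w, 3)
flow        = np.random.uniform(-2, 2, (h, w, 2))   # flow-estimation branch output
synthesized = np.random.rand(h, w, 3)                # RGB-synthesis branch output
mask        = np.random.rand(h, w)                   # learned fusion mask
next_frame  = fuse_prediction(prev_frame, flow, synthesized, mask)
print(next_frame.shape)  # (64, 64, 3)
```

In the paper's formulation the mask values are predicted by the network rather than fixed, so the model can lean on flow warping where motion is reliable and on pixel synthesis where appearance changes dominate.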
