Abstract

Video frame interpolation is a challenging problem that involves diverse scenarios depending on the variety of foreground and background motions, the frame rate, and occlusion. Generalizing across different scenes is therefore difficult for a single network with fixed parameters. Ideally, one could use a different network for each scenario, but this would be computationally infeasible for practical applications. In this work, we propose MetaVFI, an adaptive video frame interpolation algorithm that exploits additional information readily available at test time but unused in previous works. We first show the benefits of test-time adaptation through simple fine-tuning of a network and then greatly improve its efficiency by incorporating meta-learning. As a result, we obtain significant performance gains with only a single gradient update and without introducing any additional parameters. Moreover, the proposed MetaVFI algorithm is model-agnostic and can easily be combined with any video frame interpolation network. We show that our adaptive framework greatly improves the performance of baseline video frame interpolation networks on multiple benchmark datasets.
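
To make the test-time adaptation idea concrete, the sketch below shows one possible MAML-style inner update, in which a frame interpolation network is fine-tuned for a single gradient step on a self-supervised loss built from frames of the test video itself before producing the final output. This is a minimal illustration under our own assumptions, not the paper's implementation; the network interface `vfi_net(frame_a, frame_b)`, the function `adapt_and_interpolate`, the inner learning rate, and the choice of an L1 loss are hypothetical placeholders.

```python
# Hypothetical sketch: single-gradient-step test-time adaptation for a
# video frame interpolation (VFI) network. Not the authors' code.
import torch
import torch.nn.functional as F


def adapt_and_interpolate(vfi_net, frame0, frame1, frame2, inner_lr=1e-5):
    """Adapt `vfi_net` with one gradient step on the test clip, then interpolate.

    frame0, frame2: two frames from the test video used as network inputs.
    frame1: an intermediate frame already present in the test video, used as a
            self-supervised target for adaptation (assumed available at test time).
    """
    # Inner loss: how well the current weights reconstruct the known middle frame.
    pred = vfi_net(frame0, frame2)          # assumed two-frame interface
    inner_loss = F.l1_loss(pred, frame1)    # placeholder reconstruction loss

    # Single gradient update on the existing parameters (no extra parameters added).
    params = [p for p in vfi_net.parameters() if p.requires_grad]
    grads = torch.autograd.grad(inner_loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= inner_lr * g

    # Interpolate the actual target frame with the adapted weights.
    with torch.no_grad():
        return vfi_net(frame0, frame2)
```

In the meta-learning stage described above, an outer loop would additionally optimize the initial weights so that this single inner update yields the largest possible improvement, which is what makes one gradient step sufficient at test time.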
