Abstract

Video frame interpolation aims at synthesizing one or more intermediate frames between two consecutive frames; these intermediate frames should be both temporally and spatially coherent with the input frames and with each other. Video frame interpolation is a classic problem in computer vision with many applications, e.g., frame-rate upscaling and slow-motion effects. Most existing approaches perform single-frame interpolation and have shown impressive performance. However, because these approaches cannot directly synthesize multiple frames at once, they can be inconvenient to use. On the other hand, existing multiple-frame interpolation approaches can incur higher storage or computational cost. Therefore, we propose an adaptive variable frame interpolation method, which determines the number of frames to be generated according to the estimated motion, to reduce storage space and improve generation efficiency. In addition, we add an edge loss to the loss function, expecting better results.
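
The abstract does not specify how the edge loss or the adaptive frame count is computed. The following is a minimal sketch of one plausible reading, assuming a Sobel-gradient edge loss and a heuristic that maps mean optical-flow magnitude to a frame count; the kernel choice, thresholds, and function names are illustrative assumptions, not the paper's stated design.

import torch
import torch.nn.functional as F

# Illustrative only: Sobel-based edge loss and a flow-magnitude heuristic
# for choosing how many intermediate frames to synthesize. These details
# are assumptions for demonstration, not taken from the paper.

_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def edge_loss(pred, target):
    """L1 distance between Sobel edge maps of predicted and target frames (assumed form)."""
    # Reduce to a single channel so one Sobel kernel applies.
    gray_pred = pred.mean(dim=1, keepdim=True)
    gray_tgt = target.mean(dim=1, keepdim=True)
    gx_p = F.conv2d(gray_pred, _SOBEL_X, padding=1)
    gy_p = F.conv2d(gray_pred, _SOBEL_Y, padding=1)
    gx_t = F.conv2d(gray_tgt, _SOBEL_X, padding=1)
    gy_t = F.conv2d(gray_tgt, _SOBEL_Y, padding=1)
    return (gx_p - gx_t).abs().mean() + (gy_p - gy_t).abs().mean()

def adaptive_frame_count(flow, max_frames=7):
    """Choose the number of intermediate frames from mean flow magnitude (hypothetical heuristic)."""
    # flow: (B, 2, H, W) optical flow between the two input frames.
    magnitude = flow.norm(dim=1).mean().item()  # average per-pixel displacement in pixels
    # Larger motion -> more intermediate frames, capped at max_frames.
    return max(1, min(max_frames, int(round(magnitude / 4.0))))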
