Abstract

Frame interpolation has many applications in video processing, including frame rate up-conversion and video compression. Deep learning-based methods have been proposed for frame interpolation, but they typically require long runtimes to achieve good visual quality. In this paper, we introduce an efficient frame interpolation method based on a modified generative adversarial network. The proposed framework consists of a generator with a pair of down–up scale modules: the down-scaled-input module captures the overall structure of the scene, while the original-scale-input module restores finer textures. Skip connections and an input processing block are further incorporated into this minimal two-scale generator design to expedite processing without losing image features. The difference between the synthesized frame and the ground truth is measured by a combined loss function comprising one adversarial loss and three reconstruction losses. Compared with state-of-the-art motion-compensation and deep-learning-based frame interpolation approaches, the proposed framework achieves the most favorable trade-off between synthesis quality and runtime.
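The abstract does not specify which three reconstruction losses are used, so the following is only an illustrative sketch of how such a combined objective might be assembled: a non-saturating adversarial term plus three assumed reconstruction terms (L1, L2, and a gradient-difference loss). The weights and loss choices here are hypothetical placeholders, not the paper's actual formulation.

```python
import numpy as np

def l1_loss(pred, gt):
    # Mean absolute pixel error.
    return np.mean(np.abs(pred - gt))

def l2_loss(pred, gt):
    # Mean squared pixel error.
    return np.mean((pred - gt) ** 2)

def gradient_loss(pred, gt):
    # Penalize mismatch in horizontal/vertical image gradients,
    # encouraging sharper edges than L1/L2 alone.
    gx = np.abs(np.diff(pred, axis=1)) - np.abs(np.diff(gt, axis=1))
    gy = np.abs(np.diff(pred, axis=0)) - np.abs(np.diff(gt, axis=0))
    return np.mean(np.abs(gx)) + np.mean(np.abs(gy))

def combined_loss(pred, gt, d_fake, w_adv=0.01, w1=1.0, w2=1.0, wg=1.0):
    # d_fake: discriminator's score for the synthesized frame, in (0, 1].
    # Non-saturating adversarial term for the generator: -log D(fake).
    adv = -np.log(np.clip(d_fake, 1e-8, 1.0))
    return (w_adv * adv
            + w1 * l1_loss(pred, gt)
            + w2 * l2_loss(pred, gt)
            + wg * gradient_loss(pred, gt))
```

A perfect synthesis (prediction equal to ground truth, discriminator fully fooled) drives every term, and hence the total, to zero; any pixel or gradient mismatch increases the objective.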
