Abstract

This work proposes neural reference synthesis (NRS) to generate high-fidelity reference blocks for motion estimation and motion compensation (MEMC) in inter-frame coding. NRS comprises two submodules: one for reconstruction enhancement and the other for reference generation. Although numerous methods, based on either handcrafted rules or deep convolutional neural network (CNN) models, have been developed for these two submodules, they have largely treated the two separately, resulting in limited coding gains. By contrast, NRS optimizes them collaboratively. It first develops two CNN-based models, namely EnhNet and GenNet: EnhNet exploits only the spatial correlations within the current frame for reconstruction enhancement, while GenNet further aggregates temporal correlations across multiple frames for reference synthesis. However, directly concatenating EnhNet and GenNet without accounting for the complex temporal reference dependency across inter frames implicitly induces iterative CNN processing and causes overfitting, leading to visually disturbing artifacts and oversmoothed pixels. To tackle this problem, NRS applies a new training strategy that coordinates EnhNet and GenNet for more robust and generalizable models, and devises a lightweight multi-level R-D (rate-distortion) selection policy that lets the encoder adaptively choose between reference blocks generated by the proposed NRS model and those from the conventional coding process. NRS not only offers state-of-the-art coding gains, e.g., >10% BD-Rate (Bjøntegaard Delta Rate) reduction against the High Efficiency Video Coding (HEVC) anchor for a variety of common test sequences encoded over a wide bitrate range in both low-delay and random-access settings, but also greatly reduces complexity relative to existing learning-based methods by using lighter-weight DNNs. All models are made publicly accessible at https://github.com/IVC-Projects/NRS for reproducible research.
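The pipeline the abstract describes can be summarized in a short sketch. Below is a minimal, illustrative PyTorch version, assuming simple residual CNNs for EnhNet and GenNet and the standard Lagrangian cost J = D + λR for the encoder's selection step; the layer counts, channel widths, and the `rd_cost` helper are assumptions made here for illustration, not the authors' released configuration (see the GitHub link above for the actual models).

```python
# Illustrative sketch of the NRS pipeline described in the abstract.
# Layer configurations and the rd_cost helper are assumptions, not the
# authors' exact design.
import torch
import torch.nn as nn


class EnhNet(nn.Module):
    """Enhances the current reconstruction using spatial correlations only."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, recon: torch.Tensor) -> torch.Tensor:
        return recon + self.body(recon)  # residual enhancement


class GenNet(nn.Module):
    """Synthesizes a reference by aggregating multiple enhanced frames."""
    def __init__(self, num_refs: int = 2, channels: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(num_refs, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, enhanced_refs: torch.Tensor) -> torch.Tensor:
        # enhanced_refs: (N, num_refs, H, W), temporally stacked frames
        return self.fuse(enhanced_refs)


def rd_cost(distortion: float, rate: float, lmbda: float) -> float:
    """Classic Lagrangian cost J = D + lambda * R used for mode selection."""
    return distortion + lmbda * rate


if __name__ == "__main__":
    enh, gen = EnhNet(), GenNet(num_refs=2)
    recon_t1 = torch.rand(1, 1, 64, 64)   # decoded frame t-1 (toy data)
    recon_t2 = torch.rand(1, 1, 64, 64)   # decoded frame t-2 (toy data)
    refs = torch.cat([enh(recon_t1), enh(recon_t2)], dim=1)
    synthesized = gen(refs)               # candidate reference for MEMC
    print(synthesized.shape)              # torch.Size([1, 1, 64, 64])

    # Encoder-side selection (sketch): keep whichever reference block
    # yields the lower Lagrangian cost, mirroring the multi-level R-D
    # selection policy the abstract describes.
    d_nrs, r_nrs, d_conv, r_conv, lmbda = 10.0, 0.5, 12.0, 0.4, 8.0
    use_nrs = rd_cost(d_nrs, r_nrs, lmbda) < rd_cost(d_conv, r_conv, lmbda)
    print("use NRS reference:", use_nrs)
```

The per-block R-D comparison is what keeps a poorly synthesized reference from hurting compression: the encoder can always fall back to the conventional reference when the NRS output loses on Lagrangian cost.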
