Abstract

Inter prediction serves as the foundation of the prediction-based hybrid video coding framework. State-of-the-art video coding standards employ reconstructed frames as references, and the motion vectors, which convey the relative position shift between the current block and its prediction block, are explicitly signalled in the bitstream. In this paper, we propose a highly efficient inter prediction scheme by introducing a new methodology based on a virtual reference frame, which is effectively generated with a deep neural network such that the motion data need not be explicitly signalled. In particular, the high-quality virtual reference frame is generated from two reconstructed bi-prediction frames with a deep learning based frame rate up-conversion (FRUC) algorithm. Subsequently, a novel CTU-level coding mode, termed the direct virtual reference frame (DVRF) mode, is proposed to adaptively compensate the current to-be-coded block in the sense of rate-distortion optimization (RDO). The proposed scheme is integrated into the HM-16.6 software, and experimental results demonstrate the significant superiority of the proposed method, which provides more than 3% coding gains on average for the HEVC test sequences.
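To make the DVRF idea concrete, the following is a minimal sketch of a CTU-level mode decision in the spirit of the abstract: the encoder compares the RD cost of a conventional inter prediction (which must signal motion data) against the DVRF prediction taken from the co-located block of the virtual reference frame (which signals only a mode flag). All type and function names here are illustrative assumptions, not the actual HM-16.6 integration described in the paper.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical CTU container: original or predicted samples of one CTU.
struct Block { std::vector<int16_t> pixels; int width; int height; };

// Sum of squared differences between the original CTU and a prediction.
double ssd(const Block& org, const Block& pred) {
    double d = 0.0;
    for (size_t i = 0; i < org.pixels.size(); ++i) {
        const double diff = static_cast<double>(org.pixels[i]) - pred.pixels[i];
        d += diff * diff;
    }
    return d;
}

// Lagrangian RD cost: distortion plus lambda-weighted rate (in bits).
double rdCost(double distortion, double bits, double lambda) {
    return distortion + lambda * bits;
}

// Choose between conventional inter prediction (motion data signalled) and the
// DVRF mode (prediction copied from the co-located block of the virtual
// reference frame, so only a mode flag is signalled). Returns true if DVRF
// wins the RDO comparison. Bit counts and lambda are assumed to come from the
// encoder's rate estimation, which is not modeled here.
bool chooseDvrfMode(const Block& orgCtu,
                    const Block& interPred, double interBits,
                    const Block& dvrfPred,  double dvrfFlagBits,
                    double lambda) {
    const double costInter = rdCost(ssd(orgCtu, interPred), interBits,    lambda);
    const double costDvrf  = rdCost(ssd(orgCtu, dvrfPred),  dvrfFlagBits, lambda);
    return costDvrf < costInter;
}
```

Because the DVRF prediction requires no motion vectors or reference indices, its rate term is essentially the cost of the mode flag, which is why the mode can win the RDO comparison whenever the virtual reference frame is a good interpolation of the current frame.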
