Abstract

Video coding algorithms attempt to model and remove the significant commonality that exists within a video sequence. Each new video coding standard contains tools that perform this task more efficiently than its predecessors. Modern video coding systems are block-based, wherein commonality modeling is carried out only from the perspective of the block that needs to be coded next. In this work, we argue for a commonality modeling approach that provides a seamless blending of global and local motion homogeneity information. To this end, a prediction of the current frame, the frame to be coded, is first generated by a two-step discrete cosine basis oriented (DCO) motion modeling procedure. The DCO motion model is employed rather than the traditional translational or affine motion models because it can efficiently capture complex motion fields with a smooth and sparse representation. Moreover, the proposed two-step motion modeling approach can yield better motion compensation at reduced computational complexity, since an informed guess is designed to initialize the motion search. The current frame is then partitioned into rectangular regions, and the conformance of each region to the learned motion model is examined. For regions that do not conform to the estimated global motion model, an additional DCO motion model is introduced to capture local motion homogeneity. In this way, the proposed approach generates a motion-compensated prediction of the current frame by modeling both global and local motion commonality. Experimental results show an improved rate-distortion performance of a reference high efficiency video coding (HEVC) encoder that employs the DCO prediction frame as a reference frame for encoding the current frame, with bit rate savings of up to around 9%. When compared to the more recent video coding standard, versatile video coding (VVC), a bit rate saving of 2.37% is reported.
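To make the idea of a discrete cosine basis oriented motion model concrete, the following Python sketch shows how a handful of 2-D DCT coefficients can parameterize a smooth, dense motion field, and how a reference frame can be warped with that field to form a prediction. The function names, coefficient layout, and nearest-neighbour warping are illustrative assumptions, not the encoder integration described in the paper.

import numpy as np

def dco_motion_field(coeffs_x, coeffs_y, height, width):
    # Evaluate a dense motion field parameterized by low-order 2-D DCT
    # basis functions. coeffs_x / coeffs_y are small (K x K) coefficient
    # matrices for the horizontal and vertical motion components; these
    # names and shapes are illustrative, not the paper's exact layout.
    K = coeffs_x.shape[0]
    ys = np.arange(height)
    xs = np.arange(width)
    # Separable DCT-II basis sampled over the frame grid.
    basis_y = np.cos(np.pi * (ys[:, None] + 0.5) * np.arange(K)[None, :] / height)  # (H, K)
    basis_x = np.cos(np.pi * (xs[:, None] + 0.5) * np.arange(K)[None, :] / width)   # (W, K)
    mv_x = basis_y @ coeffs_x @ basis_x.T  # (H, W) horizontal displacements
    mv_y = basis_y @ coeffs_y @ basis_x.T  # (H, W) vertical displacements
    return mv_x, mv_y

def motion_compensate(reference, mv_x, mv_y):
    # Warp the reference frame with the motion field (nearest-neighbour
    # sampling for brevity; a real codec would use sub-pel interpolation).
    H, W = reference.shape
    grid_y, grid_x = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.rint(grid_y + mv_y).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(grid_x + mv_x).astype(int), 0, W - 1)
    return reference[src_y, src_x]

# Example: a few low-order coefficients already describe a smooth,
# frame-wide (global) motion field.
reference = np.random.rand(144, 176)                 # hypothetical QCIF-sized frame
coeffs_x = np.zeros((3, 3)); coeffs_x[0, 0] = 2.0    # roughly constant horizontal shift
coeffs_y = np.zeros((3, 3)); coeffs_y[0, 1] = 0.5    # gently varying vertical motion
mv_x, mv_y = dco_motion_field(coeffs_x, coeffs_y, *reference.shape)
prediction = motion_compensate(reference, mv_x, mv_y)

Because only a few coefficients are needed per model, the representation stays sparse while still describing non-translational motion; in the same spirit, the additional region-wise DCO model described in the abstract would be fitted to the rectangular regions that do not conform to the global field.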
