Abstract
Affine Motion Estimation (AME) was introduced in the Versatile Video Coding (VVC) standard to allow for the detection of non-translational transformations during inter-frame prediction. Although it provides important coding efficiency gains, this new tool represents 43% of the motion estimation (ME) complexity. However, an analysis of the AME step shows that the affine motion vectors are often computed without yielding the best ME prediction. This paper proposes an AME early search termination based on supervised machine learning. Six Random Forest models were trained with features obtained during the encoding process to accurately predict whether the AME step should be executed, partially executed, or skipped, avoiding unnecessary calculations. As a result, the proposed solution achieves an average time saving of 46.94% in the AME step with a coding efficiency loss of only 0.18%.
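The sketch below illustrates the general idea of such an early-termination classifier: a Random Forest trained on encoder-side features predicts one of three actions (skip AME, run it partially, or run it fully) before the search starts. It is a minimal illustration, not the paper's implementation; the feature set, labels, and helper names are assumptions for demonstration only.

```python
# Illustrative sketch of Random Forest-based AME gating (not the authors' code).
# Features and labels are hypothetical examples of encoder-side statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-CU features collected during earlier encoding steps:
# [best translational ME cost, CU width, number of affine-coded neighbours]
X_train = np.array([
    [1520.0, 64, 0],
    [ 310.0, 16, 1],
    [ 870.0, 32, 2],
    [2100.0, 64, 0],
])
# Labels: 0 = skip AME, 1 = partial AME, 2 = full AME (assumed encoding).
y_train = np.array([0, 2, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def ame_decision(features):
    """Predict the AME action for one coding unit from its feature vector."""
    label = model.predict(np.asarray(features, dtype=float).reshape(1, -1))[0]
    return {0: "skip", 1: "partial", 2: "full"}[label]

# Example query for a new coding unit.
print(ame_decision([1800.0, 64, 0]))
```

In the paper's setting, the prediction would be queried inside the encoder before the affine search loop, so that the expensive 4- and 6-parameter searches are only run when the classifier expects them to improve the ME result.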