Abstract

Deep learning-based video coding methods have demonstrated performance superior to classical video coding standards in recent years. The vast majority of existing deep video coding (DVC) networks are built on convolutional neural networks (CNNs), whose main drawback is that, limited by the size of the receptive field, they cannot effectively model long-range dependencies or recover local detail. The core issue in video coding is therefore how to better capture and process both the overall structure and the local texture information of a frame. Notably, the transformer employs a self-attention mechanism that captures dependencies between any two positions in the input sequence without distance constraints, which directly addresses this problem. In this paper, we propose end-to-end transformer-based adaptive video coding (TAVC). First, we compress motion vectors and residuals with a compression network built on the vision transformer (ViT) and design a ViT-based motion compensation network. Second, because video coding must adapt to inputs of different resolutions, we introduce a position encoding generator (PEG) as an adaptive position encoding (APE) that maintains translation invariance across video coding tasks at different resolutions. Experiments show that, under the multiscale structural similarity (MS-SSIM) metric, TAVC achieves significant performance gains over conventional engineering codecs such as H.264, H.265, and VTM-15.2, as well as a clear improvement over CNN-based DVC methods. Under the peak signal-to-noise ratio (PSNR) metric, TAVC also performs well.
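To make the resolution-adaptive behavior of the PEG concrete, below is a minimal PyTorch sketch in the style of conditional positional encodings: a depthwise convolution over the patch-token grid produces an encoding that works at any input resolution, rather than a fixed learned embedding tied to one grid size. The class name, layer sizes, and residual-add placement are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class PositionEncodingGenerator(nn.Module):
    """Sketch of a PEG: position encodings from a depthwise convolution.

    Hypothetical module for illustration; hyperparameters are not
    taken from the TAVC paper.
    """

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise conv (groups == channels); padding preserves the
        # spatial size, so the module accepts any (h, w) grid.
        self.proj = nn.Conv2d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (B, N, C) patch tokens with N == h * w.
        b, n, c = tokens.shape
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        # Add the convolutional encoding back onto the tokens (residual).
        feat = self.proj(feat) + feat
        return feat.flatten(2).transpose(1, 2)  # back to (B, N, C)

if __name__ == "__main__":
    peg = PositionEncodingGenerator(dim=96)
    # The same module handles token grids of different resolutions
    # without retraining or interpolating an embedding table.
    for h, w in [(16, 16), (30, 17)]:
        x = torch.randn(2, h * w, 96)
        assert peg(x, h, w).shape == x.shape
```

Because the convolution is translation-invariant and resolution-agnostic, the same module can serve frames of any size, which is the property the abstract attributes to the APE; fixed learned position embeddings, by contrast, must be interpolated whenever the input resolution changes.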
