Abstract

Learned video compression has recently attracted considerable research attention. In existing methods, however, the motion used for alignment is limited to a single hypothesis, which leads to inaccurate motion estimation, especially in complicated scenes with complex movements. Motivated by the multiple-hypotheses philosophy of traditional video compression, we develop multiple-hypotheses-based motion compensation for learned video compression, aiming to improve motion compensation by providing diverse hypotheses together with efficient temporal information fusion. In particular, a multiple hypotheses module, which produces multiple motions and warped features to mine sufficient temporal information, is proposed to provide various hypothesis inferences from the reference frame. To exploit these hypotheses more fully, a hypotheses attention module is adopted, built on a channel-wise squeeze-and-excitation layer and a multi-scale network. In addition, context combination is employed to fuse the weighted hypotheses and generate effective contexts with strong temporal priors. Finally, the resulting contexts are used to improve compression efficiency by merging the weighted warped features. Extensive experiments show that the proposed method significantly improves the rate-distortion performance of learned video compression. Compared with the state-of-the-art end-to-end video compression method, average bit-rate reductions of over 13% are achieved in terms of both PSNR and MS-SSIM.
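To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the multi-hypothesis idea: several motion fields warp the reference feature, a channel-wise squeeze-and-excitation (SE) layer weights the concatenated hypotheses, and a small fusion network merges them into a single temporal context. All module names, channel sizes, and the number of hypotheses here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(feat, flow):
    """Backward-warp a feature map (B, C, H, W) with a dense flow field (B, 2, H, W)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow   # absolute sample coords
    # normalize coordinates to [-1, 1] for grid_sample
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)              # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)


class SEAttention(nn.Module):
    """Channel-wise squeeze-and-excitation over the concatenated hypotheses."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pooling
        return x * w[:, :, None, None]         # excite: reweight channels


class MultiHypothesisContext(nn.Module):
    """Warp the reference feature with K motion hypotheses and fuse them into one context."""
    def __init__(self, feat_ch=64, num_hyp=3):
        super().__init__()
        self.attn = SEAttention(feat_ch * num_hyp)
        self.fuse = nn.Conv2d(feat_ch * num_hyp, feat_ch, kernel_size=3, padding=1)

    def forward(self, ref_feat, flows):
        # flows: list of K flow fields, each of shape (B, 2, H, W)
        hyps = [warp(ref_feat, f) for f in flows]       # K warped hypotheses
        weighted = self.attn(torch.cat(hyps, dim=1))    # channel-wise hypothesis weighting
        return self.fuse(weighted)                      # fused temporal context


# Usage: fuse three hypothetical motion fields into a single context feature.
ref_feat = torch.randn(1, 64, 32, 48)
flows = [torch.randn(1, 2, 32, 48) for _ in range(3)]
context = MultiHypothesisContext()(ref_feat, flows)
print(context.shape)  # torch.Size([1, 64, 32, 48])
```

This sketch omits the multi-scale branch of the attention module and the entropy coding stages; it only illustrates how multiple warped hypotheses can be weighted and merged into one context.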
