Abstract
Wyner-Ziv (WZ) video coding shifts the burden of complex computation from the encoder to the decoder, making it suitable for video coding scenarios with limited encoding resources. Motivated by the superior performance deep learning has shown over traditional methods, this paper proposes a deep WZ video coding scheme aided by auxiliary hierarchical features at the decoder. An autoencoder is used to encode the WZ frames. On the decoder side, an inter-frame correlation model called SI-Net is employed to enhance WZ frame quality using the Key frames. The auxiliary hierarchical features are extracted from the Key frames through multi-level downsampling and fed into the autoencoder to optimize feature extraction, avoiding the vanishing-gradient problem caused by deep networks. Because the auxiliary hierarchical features of the Key frames describe spatial information and expand the network’s perception of video frame features, a high-quality reconstruction of the WZ frames can be obtained. Compared with previous work, our method shows clear superiority on four video datasets with different degrees of motion.
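To illustrate the multi-level downsampling mentioned above, the following is a minimal PyTorch-style sketch of an auxiliary hierarchical feature extractor applied to a decoded Key frame. It is not the authors' implementation; the module name HierarchicalFeatureExtractor and parameters such as base_channels and num_levels are hypothetical, chosen only to show how features at several spatial scales could be produced and later injected into the WZ-frame autoencoder.

```python
# Hedged sketch of multi-level downsampling for auxiliary hierarchical features.
# All names and hyperparameters here are illustrative assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn


class HierarchicalFeatureExtractor(nn.Module):
    """Extracts multi-scale features from a decoded Key frame by repeated
    stride-2 downsampling; each level's feature map could be fed into the
    corresponding stage of the WZ-frame autoencoder to guide reconstruction."""

    def __init__(self, in_channels=3, base_channels=32, num_levels=3):
        super().__init__()
        self.levels = nn.ModuleList()
        ch_in = in_channels
        for i in range(num_levels):
            ch_out = base_channels * (2 ** i)
            self.levels.append(nn.Sequential(
                nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch_out, ch_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch_in = ch_out

    def forward(self, key_frame):
        feats = []
        x = key_frame
        for level in self.levels:
            x = level(x)
            feats.append(x)  # one feature map per spatial scale
        return feats


if __name__ == "__main__":
    extractor = HierarchicalFeatureExtractor()
    key = torch.randn(1, 3, 144, 176)  # e.g. a QCIF-sized Key frame
    for i, f in enumerate(extractor(key)):
        print(f"level {i}: {tuple(f.shape)}")
```

Under these assumptions, each level halves the spatial resolution while increasing the channel count, yielding a coarse-to-fine description of the Key frame's spatial information that a decoder-side network could consume alongside the WZ bitstream.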