Abstract
Owing to its open-loop structure and good decorrelation capability, motion-compensated temporal filtering (MCTF) provides a robust basis for highly efficient scalable video coding. Combining MCTF with spatial wavelet decomposition and embedded quantization yields a 3D wavelet video compression system offering temporal, spatial, and SNR scalability. Recent results indicate that the overall coding performance of such systems is maximized when temporal filtering is performed in the spatial domain (the t+2D approach). Compared to non-scalable video coding, however, the performance of t+2D systems may be unsatisfactory when spatial scalability must be provided; one important reason for this is the limited spatial scalability of the motion information. In this paper we present a conceptually new approach to t+2D-based video compression with spatially scalable motion information. We call our approach overcomplete MCTF, since multiple spatial-domain temporal filtering operations are needed to generate the lower spatial scales of the temporal subbands. Specifically, the encoder performs MCTF-based generation of reference sequences for the coarser spatial scales, and we find that the newly generated reference sequences are of satisfactory quality. Compared with the conventional t+2D system, our approach allows the reconstruction quality at lower spatial scales to be optimized while having reduced impact on the reconstruction quality at high spatial scales and bitrates.
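To illustrate the open-loop temporal filtering the abstract builds on, the following is a minimal sketch of one level of Haar-style MCTF via lifting. It assumes identity motion compensation and treats frames as 1-D lists of floats for illustration only; a real codec would warp the reference frame along estimated motion vectors before the predict and update steps.

```python
# One level of Haar-style motion-compensated temporal filtering (MCTF),
# expressed as a lifting scheme. Motion compensation is assumed to be the
# identity here (hypothetical simplification); in practice each reference
# frame would be motion-warped before prediction.

def mctf_analyze(frames):
    """Split an even-length frame sequence into temporal low-/high-pass subbands."""
    lows, highs = [], []
    for k in range(len(frames) // 2):
        even, odd = frames[2 * k], frames[2 * k + 1]
        # Predict step: high-pass = odd frame minus (motion-compensated) even frame.
        h = [o - e for o, e in zip(odd, even)]
        # Update step: low-pass = even frame plus half the high-pass residual.
        l = [e + 0.5 * hv for e, hv in zip(even, h)]
        highs.append(h)
        lows.append(l)
    return lows, highs

def mctf_synthesize(lows, highs):
    """Invert the lifting steps to recover the original frames exactly."""
    frames = []
    for l, h in zip(lows, highs):
        even = [lv - 0.5 * hv for lv, hv in zip(l, h)]
        odd = [hv + e for hv, e in zip(h, even)]
        frames.extend([even, odd])
    return frames
```

Because the lifting steps are inverted exactly at the decoder, the structure is open-loop: analysis never depends on decoded (quantized) frames, which is what makes embedded quantization and SNR scalability straightforward in such systems.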