Abstract

Generally speaking, rate scalable video systems today are evaluated operationally, meaning that the algorithm is implemented and the rate-distortion performance is evaluated for an example set of inputs. In these cases, however, it is difficult to separate artifacts caused by the particular compression algorithm and data set from general trends associated with scalability. In this paper, we derive and evaluate theoretical rate-distortion performance bounds for both layered and continuously rate scalable video compression algorithms which use a single motion-compensated prediction (MCP) loop. These bounds are derived using rate-distortion theory based on an optimum mean-square error (MSE) quantizer, and are thus applicable to all methods of intraframe encoding which use MSE as a distortion measure. By specifying translatory motion and using an approximation of the prediction error frame power spectral density, it is possible to derive parametric versions of the rate-distortion functions which are based solely on the input power spectral density and the accuracy of the motion-compensated prediction. The theory is applicable to systems which allow prediction drift, such as the data-partitioning and SNR-scalability schemes in MPEG-2, as well as those with zero prediction drift, such as MPEG-4 fine granularity scalability. For systems which allow prediction drift, we show that optimum motion compensation is a sufficient condition for stability of the decoding system.
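For context, the parametric rate-distortion functions referenced above presumably follow the classical reverse water-filling form for a stationary Gaussian field under MSE distortion; the sketch below uses illustrative notation (the prediction error power spectral density $\Phi_{ee}(\omega)$ and water-level parameter $\theta$), not necessarily the paper's own symbols.

% Classical parametric (reverse water-filling) rate-distortion form for a
% stationary Gaussian field with power spectral density \Phi_{ee}(\omega)
% under MSE distortion; \theta > 0 is the water-level parameter and
% \Omega = [-\pi,\pi]^2 the spatial frequency domain. The paper's bounds
% presumably specialize \Phi_{ee} to the approximated PSD of the
% motion-compensated prediction error.
\begin{align}
  D(\theta) &= \frac{1}{(2\pi)^2} \int_{\omega \in \Omega}
      \min\bigl(\theta,\, \Phi_{ee}(\omega)\bigr)\, d\omega, \\
  R(\theta) &= \frac{1}{(2\pi)^2} \int_{\omega \in \Omega}
      \max\Bigl(0,\, \tfrac{1}{2} \log_2 \frac{\Phi_{ee}(\omega)}{\theta}\Bigr)\, d\omega.
\end{align}

Sweeping $\theta$ from $\max_{\omega} \Phi_{ee}(\omega)$ down to $0$ traces the rate-distortion curve: frequencies with $\Phi_{ee}(\omega) \le \theta$ receive no rate, which is why more accurate motion compensation (a flatter, lower prediction error PSD) shifts the bound downward.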
