Abstract

Several questions concerning the performance in ADPCM systems of sequentially adaptive backward predictors based on the adaptive gradient and Kalman-type algorithms are addressed. Using a Jayant-type adaptive quantizer, it is shown that for bit rates less than 16 kbits/s with second-order predictors, and for bit rates less than 18.4 kbits/s with fourth-order predictors, backward-adaptive predictors have a definite performance advantage over fixed-tap predictors, since the latter may cause system divergence. For higher bit rates, the adaptive gradient predictor offers no advantage over a second-order fixed-tap predictor; however, the Kalman predictor produces a substantial performance improvement over the fixed-tap predictor. It is also shown that the Kalman predictor maintains a significant advantage over the adaptive gradient predictor at all bit rates from 12.8 to 32 kbits/s. Finally, it is noted that the ADPCM system divergence that occurs for fixed, multiple-tap predictors and a Jayant quantizer is caused by predictor mismatch with the input signal, coupled with the infinite memory of the quantizer adaptation. This problem can be corrected by a modification to the quantizer adaptation logic.
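To make the components concrete, the following is a minimal sketch of an ADPCM encoder loop combining an adaptive gradient (LMS-style) backward predictor with a Jayant-type step-size adaptation. All numeric values (the multiplier table, step-size limits, learning rate `mu`) are illustrative assumptions, not the paper's settings; the `beta < 1` leakage term is one standard way to give the quantizer adaptation finite memory, shown here only to illustrate the kind of modification the abstract alludes to.

```python
import numpy as np

def jayant_step(delta, code_mag, multipliers, beta=1.0,
                dmin=1e-4, dmax=10.0):
    """Jayant step-size update: delta is scaled by a multiplier
    chosen by the magnitude of the last quantizer code.
    beta = 1.0 gives the classic rule (infinite memory);
    beta < 1.0 adds leakage so old step sizes are forgotten
    (an assumed, illustrative finite-memory modification)."""
    new = (delta ** beta) * multipliers[code_mag]
    return min(max(new, dmin), dmax)   # clamp to a valid range

def adpcm_encode(x, order=2, mu=0.5, beta=1.0):
    """Backward-adaptive ADPCM sketch with a 2-bit quantizer
    (levels at +/-0.5*delta and +/-1.5*delta) and an LMS
    (adaptive gradient) predictor updated from reconstructed
    samples only, so the decoder can track it."""
    multipliers = [0.9, 1.6]        # assumed multiplier table
    w = np.zeros(order)             # predictor taps
    hist = np.zeros(order)          # past reconstructed samples
    delta = 1.0
    recon = np.zeros(len(x))
    for n, s in enumerate(x):
        pred = w @ hist             # backward prediction
        e = s - pred                # prediction error
        mag = 1 if abs(e) >= delta else 0
        q = np.sign(e) * (mag + 0.5) * delta   # quantized error
        y = pred + q                # reconstructed sample
        recon[n] = y
        # normalized adaptive-gradient tap update, driven by the
        # quantized error (the only error the decoder can see)
        w += mu * q * hist / (hist @ hist + 1e-8)
        hist = np.roll(hist, 1)
        hist[0] = y
        delta = jayant_step(delta, mag, multipliers, beta)
    return recon
```

A Kalman-type predictor would replace the single-gain LMS update with a per-tap gain computed from an estimated error covariance, which is the source of the performance gap the abstract reports.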
