Abstract

Block floating point quantization (BFPQ) exploits signal statistics by sharing one common exponent among a block of data. The output signal-to-quantization-noise ratio (SQNR) may drop because of the increase in quantization error resulting from incrementing the exponent, especially for non-uniformly distributed input signals. A tunable BFPQ is therefore proposed. With the aid of a tuning parameter that enlarges the thresholds for deciding the exponent and fractional exponent, the quantization error and saturation error can be balanced, and the output SQNR can thus be kept as high as possible. Both analytic and simulated results are provided to verify the effectiveness of the tuning parameter for Gaussian-distributed and Laplacian-distributed signals. The improvement in output SQNR over the conventional BFPQ is also shown. Finally, the concept is applied to support real-time high-speed compression for high-resolution synthetic aperture radar image applications. We demonstrate that the tunable BFPQ can be accomplished with only a small overhead while bringing substantial performance gain, especially for large data blocks.
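The core idea can be illustrated with a minimal sketch: each block picks one shared exponent from its peak magnitude, and a tuning parameter raises the threshold at which the exponent is incremented, trading a little saturation of the largest samples for a finer quantization step on the rest. The function and parameter names below (`tunable_bfpq`, `tau`, `frac_bits`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tunable_bfpq(x, block_size=8, frac_bits=7, tau=1.0):
    """Block floating point quantization with one shared exponent per block.

    tau >= 1 is a hypothetical tuning parameter: it enlarges the threshold
    used when deciding the block exponent, balancing quantization error
    against saturation error (names are illustrative, not from the paper).
    """
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size          # pad so length divides evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    out = np.empty_like(blocks)
    qmax = 2 ** frac_bits - 1             # largest mantissa magnitude
    for i, b in enumerate(blocks):
        peak = np.max(np.abs(b))
        # Shared exponent chosen so peak / tau fits the mantissa range;
        # tau > 1 lowers the exponent, shrinking the quantization step
        # at the cost of clipping (saturating) the largest samples.
        exp = max(int(np.ceil(np.log2(peak / tau + 1e-300))), -126)
        step = 2.0 ** exp / qmax
        out[i] = np.clip(np.round(b / step), -qmax - 1, qmax) * step
    return out.reshape(-1)[:len(x)]

def sqnr_db(x, xq):
    """Output SQNR in dB: signal power over quantization-noise power."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))
```

With `tau = 1` this reduces to a conventional BFPQ; sweeping `tau` on a Gaussian- or Laplacian-distributed signal and measuring `sqnr_db` reproduces the error trade-off the abstract describes.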
