Abstract
The main issue with non-adaptive scalar quantizers is their sensitivity to variance mismatch, the effect that occurs when the variance of the data differs from the variance assumed in the quantizer design. In this paper, we analyze the influence of this effect on low-rate (2-bit) uniform scalar quantization (USQ) of a Laplacian source and propose an adequate measure to suppress it. In particular, the proposed approach is an upgraded version of previous approaches used to improve the performance of a single quantizer. It is based on dual-mode quantization, which combines two 2-bit USQs (with adequately chosen parameters) and selects between them by applying a special rule to the input data. Theoretical analysis has shown that the proposed approach is less sensitive to variance mismatch, making the dual-mode USQ more robust than the single USQ. A gain is also achieved compared with other 2-bit quantizer solutions. Experimental results are provided for the quantization of the weights of a multi-layer perceptron (MLP) neural network, where good agreement with the theoretical results is observed. Owing to these achievements, we believe that the proposed solution is a good choice for the compression of non-stationary data modeled by the Laplacian distribution, such as neural network parameters.
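To make the idea concrete, the following is a minimal sketch of a 2-bit uniform scalar quantizer and a dual-mode wrapper around two such quantizers. The abstract does not specify the paper's quantizer parameters or its selection rule, so the midrise reconstruction, the step sizes, and the magnitude-threshold routing below are all illustrative assumptions, not the authors' design.

```python
import numpy as np

def usq_2bit(x, delta):
    """2-bit midrise uniform scalar quantizer with step size delta.
    Four cells (indices clipped to -2..1), reconstruction at cell midpoints.
    This is a generic USQ sketch, not the paper's exact parameterization."""
    idx = np.clip(np.floor(np.asarray(x, dtype=float) / delta), -2, 1)
    return (idx + 0.5) * delta

def dual_mode_usq(x, delta_small, delta_large, threshold):
    """Dual-mode quantization sketch: route each sample to one of two 2-bit
    USQs. Here the (hypothetical) rule sends small-magnitude samples to the
    fine quantizer and the rest to the coarse one; the paper's actual
    selection rule may differ and would require signaling the chosen mode."""
    x = np.asarray(x, dtype=float)
    use_small = np.abs(x) < threshold
    return np.where(use_small,
                    usq_2bit(x, delta_small),
                    usq_2bit(x, delta_large))
```

For example, with `delta_small=0.5`, `delta_large=1.5`, and `threshold=1.0`, a sample of 0.3 is quantized finely to 0.25, while a sample of 2.0 falls to the coarse quantizer and reconstructs to 2.25. The intuition matches the abstract: by covering two operating ranges, the combined quantizer degrades more gracefully when the source variance drifts from the design value.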