Abstract

Model compression has drawn great attention in the deep learning community. A core problem in model compression is determining the optimal layer-wise compression policy, e.g., the layer-wise bit-width in network quantization. Conventional hand-crafted heuristics rely on human experts and are usually sub-optimal, while recent reinforcement learning based approaches can be inefficient when exploring the policy space. In this article, we propose Bayesian automatic model compression (BAMC), which leverages non-parametric Bayesian methods to learn the optimal quantization bit-width for each layer of the network. BAMC is trained in a one-shot manner, avoiding the back-and-forth (re)training required by reinforcement learning based approaches. Experimental results on various datasets validate that our proposed method can efficiently find reasonable quantization policies with little accuracy drop for the quantized network.
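To make the notion of a layer-wise quantization policy concrete, the sketch below applies a different uniform bit-width to each layer of a toy network. The min-max quantizer, the layer names, and the example bit-widths are illustrative assumptions, not the paper's actual BAMC procedure, which infers the per-layer bit-widths with non-parametric Bayesian methods.

```python
import numpy as np

def quantize_uniform(weights, bits):
    """Uniformly quantize a weight tensor to the given bit-width.

    Symmetric min-max quantization; a common baseline scheme,
    not necessarily the one used in the paper.
    """
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)  # map to integer grid
    return q * scale + w_min                 # map back to float range

# Hypothetical layer-wise policy: one bit-width per layer, as a
# method like BAMC would output (concrete values are made up here).
policy = {"conv1": 8, "conv2": 4, "fc": 6}

rng = np.random.default_rng(0)
layers = {name: rng.standard_normal((64, 64)).astype(np.float32)
          for name in policy}
quantized = {name: quantize_uniform(w, policy[name])
             for name, w in layers.items()}

# Lower bit-widths give coarser grids and larger quantization error.
for name, w in layers.items():
    err = np.abs(w - quantized[name]).mean()
    print(f"{name}: {policy[name]}-bit, mean abs error {err:.4f}")
```

The search problem the abstract describes is choosing the bit-width dictionary above for every layer so that overall accuracy loss stays small under a compression budget.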

