Abstract

While binarized neural networks (BNNs) have attracted great interest, popular approaches proposed so far mainly rely on the symmetric sign function for feature binarization, i.e., activations are binarized into -1 and +1 with a fixed threshold of 0. However, whether this choice is optimal has been largely overlooked. In this work, we propose the Sparsity-inducing BNN (Si-BNN), which quantizes activations to either 0 or +1 and thus better approximates ReLU with a single bit. We further introduce trainable thresholds into the backward function of binarization to guide gradient propagation. Our method dramatically outperforms prior binarization approaches, narrowing the performance gap between full-precision networks and BNNs on mainstream architectures and achieving new state-of-the-art accuracy on binarized AlexNet (Top-1 50.5%), ResNet-18 (Top-1 62.2%), and ResNet-50 (Top-1 68.3%). At inference time, Si-BNN retains the high efficiency of bit-wise operations: in our implementation, the running time of binary AlexNet on a CPU is competitive with that of a popular GPU-based deep learning framework.
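
To make the two ingredients of the abstract concrete, below is a minimal PyTorch sketch of a Si-BNN-style activation: the forward pass binarizes activations to {0, +1}, and the backward pass gates the straight-through gradient with a trainable threshold. This is not the authors' code; the class names (SparseBinarize, SiBNNActivation), the gating window |x| <= t, and the placeholder gradient for the threshold are illustrative assumptions based only on the abstract's description.

```python
# Minimal sketch of a {0, +1} binarized activation with a trainable
# gradient-gating threshold. Illustrative only, not the paper's exact rules.
import torch
import torch.nn as nn


class SparseBinarize(torch.autograd.Function):
    """Binarize activations to {0, +1} (a 1-bit approximation of ReLU)."""

    @staticmethod
    def forward(ctx, x, t):
        ctx.save_for_backward(x, t)
        # Forward: positive activations become +1, everything else becomes 0,
        # which keeps the binary feature maps sparse.
        return (x > 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        x, t = ctx.saved_tensors
        # Backward: a straight-through-style estimator gated by the trainable
        # threshold t -- gradients only flow where |x| <= t.
        gate = (x.abs() <= t).to(grad_out.dtype)
        grad_x = grad_out * gate
        # Placeholder gradient for t (assumption; the paper derives its own
        # update rule): driven by the gradient mass the gate currently blocks.
        grad_t = (grad_out * (x.abs() > t).to(grad_out.dtype)).sum()
        return grad_x, grad_t


class SiBNNActivation(nn.Module):
    """Drop-in activation layer with a learnable gradient-gating threshold."""

    def __init__(self, init_t=1.0):
        super().__init__()
        self.t = nn.Parameter(torch.tensor(init_t))

    def forward(self, x):
        return SparseBinarize.apply(x, self.t)


if __name__ == "__main__":
    act = SiBNNActivation()
    x = torch.randn(4, 8, requires_grad=True)
    y = act(x)          # values in {0.0, 1.0}
    y.sum().backward()  # gradients reach both x and the threshold act.t
```

Because the resulting activations take values in {0, 1}, they can still be packed into bit vectors at inference time, which is consistent with the bit-wise efficiency the abstract reports.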
