Abstract

Binary neural networks (BNNs) are promising for resource-constrained devices because they effectively reduce memory consumption and accelerate inference. However, there is still room for performance improvement. Prior studies attribute the performance degradation of BNNs to limited representation ability and gradient mismatch. In this paper, we find that it also results from the mandatory representation of small full-precision auxiliary weights as large binary values. To tackle this issue, we propose an approach dubbed Diluted Binary Neural Network (DBNN). Besides effectively avoiding mandatory representation, the proposed DBNN also alleviates the sign-flip problem to a large extent. For activations, we develop a binarization scheme that jointly minimizes quantization error and maximizes information entropy. Compared with existing sparsity-binarization approaches, DBNN trains the network from scratch without additional procedures and achieves higher sparsity. Experiments on several datasets with various networks demonstrate the superiority of our approach.
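The mandatory-representation issue can be illustrated with the common scaled-sign binarization used by many BNNs: every latent (auxiliary) weight, no matter how close to zero, is mapped to a full-magnitude binary value. The minimal NumPy sketch below contrasts this with a sparse, "diluted" mapping that keeps a zero state for small weights; the per-tensor scale `alpha` and the threshold `tau` are illustrative assumptions, not the authors' DBNN scheme.

```python
import numpy as np

# Illustration only (not the paper's DBNN algorithm): standard scaled-sign
# binarization forces even near-zero latent weights to full-magnitude values.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=8)   # full-precision auxiliary (latent) weights
alpha = np.abs(w).mean()            # per-tensor scaling factor (XNOR-Net style)
w_bin = alpha * np.sign(w)          # every weight becomes +/- alpha, however small it was

# A sparsity-aware ("diluted") variant could instead keep a zero state for
# small weights; the threshold tau here is a hypothetical choice for illustration.
tau = 0.05
w_diluted = np.where(np.abs(w) < tau, 0.0, alpha * np.sign(w))

print("latent :", np.round(w, 3))
print("binary :", np.round(w_bin, 3))
print("diluted:", np.round(w_diluted, 3))
```

In the sparse variant, small latent weights are no longer forced to represent a large value, which is the behavior the abstract identifies as a source of degradation.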
