Abstract
Binary neural networks (BNNs) are promising for resource-constrained devices because they reduce memory consumption and accelerate inference effectively. However, their performance still leaves room for improvement. Prior studies attribute the performance degradation of BNNs to limited representation ability and gradient mismatch. In this paper, we find that it also results from the mandatory representation of small full-precision auxiliary weights as large values. To tackle this issue, we propose an approach dubbed Diluted Binary Neural Network (DBNN). Besides effectively avoiding mandatory representation, the proposed DBNN also alleviates the sign-flip problem to a large extent. For activations, we jointly minimize the quantization error and maximize the information entropy to develop the binarization scheme. Compared with existing sparsity-binarization approaches, DBNN trains the network from scratch without additional procedures and achieves higher sparsity. Experiments on several datasets with various networks demonstrate the superiority of our approach.
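As a rough illustration of the kind of activation binarization the abstract alludes to (the paper's exact DBNN formulation is not reproduced here), the PyTorch-style sketch below centers activations on their median, which balances the +1/-1 split and thus maximizes the entropy of the binary code, and scales by the mean absolute value, which minimizes the L2 quantization error for a fixed binary code. The function name binarize_activations and the 4D tensor shape are assumptions for illustration only.

```python
import torch


def binarize_activations(x: torch.Tensor) -> torch.Tensor:
    """Illustrative activation binarization (hypothetical; not the paper's
    exact DBNN scheme). Assumes x has shape (N, C, H, W)."""
    # Center on the per-sample median: a balanced {-1, +1} split maximizes
    # the information entropy of the binary output.
    shift = x.flatten(1).median(dim=1).values.view(-1, 1, 1, 1)
    centered = x - shift
    # Per-sample scaling factor alpha = E[|x - shift|] minimizes the L2
    # quantization error between the binary code and the real values.
    alpha = centered.abs().mean(dim=(1, 2, 3), keepdim=True)
    return alpha * torch.sign(centered)
```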