Abstract

Recently, binary neural networks (BNNs) have been studied extensively because they address the large memory footprint and power consumption of floating-point convolutional neural networks (CNNs) while maintaining tolerable accuracy. Many BNN hardware accelerators have been designed and have shown promising results, but further reductions in memory cost and improvements in energy efficiency are still needed. In this paper, we propose the bagged binary neural network accelerator (BBNA), a fully pipelined BNN accelerator with a bagging ensemble unit that aggregates several BNN pipelines to achieve better model accuracy. In other words, the proposed architecture enables embedded devices to obtain acceptable accuracy with smaller ensembled BNNs. As a result, compared to other works, our design achieves 1.9x better energy efficiency with better performance, and the ensemble method reduces the memory footprint by more than 79% and 94%, respectively, with nearly the same accuracy on the MNIST dataset.
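
The core idea behind the ensemble unit is that several small BNNs, each operating on weights and activations binarized to {-1, +1}, are combined by a bagging-style majority vote. The following minimal NumPy sketch illustrates that aggregation scheme only; the class names, shapes, and single-layer members are illustrative assumptions, not the paper's actual BBNA pipelines, which would realize the dot products as XNOR-popcount logic in hardware.

    import numpy as np

    def binarize(x):
        # Map real values to {-1, +1}, the standard BNN binarization.
        return np.where(x >= 0, 1, -1)

    class TinyBNN:
        """One hypothetical ensemble member: a single binarized linear layer."""
        def __init__(self, in_dim, n_classes, rng):
            self.w = binarize(rng.standard_normal((in_dim, n_classes)))

        def predict(self, x):
            # In hardware this {-1, +1} dot product reduces to XNOR-popcount;
            # here it is modeled with an ordinary matrix multiply.
            return np.argmax(binarize(x) @ self.w, axis=1)

    def bagged_predict(members, x):
        # Bagging aggregation: majority vote across member predictions.
        votes = np.stack([m.predict(x) for m in members])  # (n_members, n_samples)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

    rng = np.random.default_rng(0)
    members = [TinyBNN(in_dim=784, n_classes=10, rng=rng) for _ in range(3)]
    x = rng.standard_normal((5, 784))
    print(bagged_predict(members, x))  # one predicted class label per sample

Because each member stores only 1-bit weights and the vote unit is trivial, adding members grows the memory cost far more slowly than widening a single network, which is consistent with the memory-footprint savings reported above.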
