In the quest for computational efficiency, binary neural networks (BNNs) have emerged as a promising paradigm, offering significant reductions in memory footprint and computational latency. In conventional BNN implementations, however, the first and last layers are typically kept at full precision, which increases logic usage in field-programmable gate array (FPGA) deployments. To address this issue, we introduce Ponte (Represent Totally Binary Neural Network Toward Efficiency), a novel approach that extends binarization to the first and last layers of BNNs. We challenge this convention by proposing a fully binary layer replacement that mitigates the computational overhead without compromising accuracy. Our method leverages an encoding technique, Ponte::encoding, together with channel duplication strategies, Ponte::dispatch and Ponte::sharing, to address the non-linearity and capacity constraints posed by binary layers. Notably, all of these components support back-propagation, which allows our approach to be applied to the last layer as well. Through extensive experimentation on benchmark datasets, including CIFAR-10 and ImageNet, we demonstrate that Ponte not only preserves the integrity of the input data but also enhances the representational capacity of BNNs. The proposed architecture achieves comparable, if not superior, performance while significantly reducing computational demands, marking a step forward in the practical deployment of BNNs in resource-constrained environments.
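For background, the sketch below shows the standard BNN building block the abstract refers to: a convolution whose weights and activations are binarized to {-1, +1} with a straight-through estimator so that back-propagation remains possible. This is a minimal, generic illustration only; it is not the Ponte::encoding, Ponte::dispatch, or Ponte::sharing scheme, whose details appear in the full paper.

```python
# Minimal sketch of a binarized convolution with a straight-through
# estimator (STE). Generic BNN background, not the Ponte method itself.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # binarize to {-1, 0, +1}; zeros are rare in practice

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped STE: pass gradients through only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


class BinaryConv2d(nn.Conv2d):
    def forward(self, x):
        # Binarize both activations and weights before the convolution.
        xb = BinarizeSTE.apply(x)
        wb = BinarizeSTE.apply(self.weight)
        return F.conv2d(xb, wb, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

In conventional designs this binarized block is used only for the intermediate layers, while the first and last layers stay at full precision; Ponte's contribution is to replace those boundary layers with fully binary counterparts as well.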