Binary neural networks (BNNs), which use 1-bit weights and activations, have garnered interest because such extreme quantization enables low power dissipation. By implementing BNNs with computation-in-memory (CIM), which performs multiply-and-accumulate operations directly on memory arrays in an analog fashion (analog CIM), the energy efficiency of neural network processing can be improved further. However, analog CIMs are susceptible to process variation, the manufacturing variability that causes fluctuations in the electrical properties of transistors, which leads to significant degradation in BNN accuracy. Our Monte Carlo simulations demonstrate that in an SRAM-based analog CIM implementing the VGG-9 BNN model, classification accuracy on the CIFAR-10 image dataset degrades to below 50% under process variation in a 28 nm FD-SOI technology. To overcome this problem, we present a variation-aware BNN framework. The proposed framework targets SRAM-based BNN CIMs, since SRAM is the most widely used on-chip memory, but it is easily extensible to BNN CIMs based on other memory technologies. Our extensive experimental results demonstrate that, under the process variation of 28 nm FD-SOI and with an SRAM array size of 128×128, our framework significantly enhances classification accuracy on both the MNIST hand-written digit dataset and the CIFAR-10 image dataset. Specifically, for the CONVNET BNN model on MNIST, accuracy improves from 60.24% to 92.33%, while for the VGG-9 BNN model on CIFAR-10, accuracy increases from 45.23% to 78.22%.
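To illustrate how process variation can corrupt an analog binary multiply-and-accumulate, the following is a minimal sketch, not the paper's variation model or framework: it assumes variation can be approximated as additive Gaussian noise on the analog accumulation of a single 128-cell SRAM column, and all names and parameters (e.g., `sigma`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    # Map real values to {-1, +1}, the usual 1-bit BNN encoding.
    return np.where(x >= 0, 1.0, -1.0)

def analog_cim_mac(w_bin, a_bin, sigma=0.05):
    # Ideal binary multiply-accumulate: dot product of {-1, +1} vectors.
    ideal = w_bin @ a_bin
    # Hypothetical variation model (assumption, not from the paper):
    # additive Gaussian noise on the analog accumulation, scaled with
    # the accumulation length.
    noise = rng.normal(0.0, sigma * np.sqrt(len(a_bin)))
    return ideal + noise

# Example: one 128-input column, matching a 128x128 SRAM array column.
weights = binarize(rng.standard_normal(128))
activations = binarize(rng.standard_normal(128))

ideal_sum = weights @ activations
noisy_sum = analog_cim_mac(weights, activations)

# The binary output is the sign of the accumulated sum; variation-induced
# noise can flip it when the ideal sum is close to zero, which is the
# error mechanism that degrades BNN accuracy in analog CIM.
print("ideal sign:", np.sign(ideal_sum), "noisy sign:", np.sign(noisy_sum))
```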