Abstract

Convolutional Neural Networks (CNNs) have been well studied and are widely used in pattern recognition. Many pattern recognition algorithms rely on features extracted by CNN models for complex tasks such as image classification, object detection, and natural language processing. However, to handle increasingly complex tasks, modern CNN models have grown ever larger, containing vast numbers of parameters and requiring heavy computation, which leads to high memory, computational, and power consumption during inference. This makes it difficult to run CNN-based applications in real time on mobile devices, where memory, computational, and power resources are limited. Binarization of neural networks has been proposed to reduce the memory and computational complexity of CNNs. However, traditional implementations of Binary Neural Networks (BNNs) follow the conventional im2col-based convolution computation flow, which is widely used in floating-point networks but is not cache-friendly when applied to binarized networks. In this paper, we propose BitStream, a general architecture for efficient inference of BNNs on CPUs. BitStream introduces a simple yet novel computation flow for BNNs: unlike existing implementations, all layers, including convolutional, binarization, and pooling layers, are computed in binary precision. Comprehensive analyses demonstrate that the proposed computation flow consumes less memory during BNN inference and is cache-friendly thanks to its contiguous memory access pattern.
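
To make the binary-precision computation concrete, the sketch below shows the standard XNOR-popcount formulation of a binarized dot product, the arithmetic primitive that makes binary-precision convolution cheap on CPUs. This is an illustrative example only, not BitStream's actual implementation: the function name binary_dot is ours, the code assumes the bit length is a multiple of 64 (so no padding bits need masking), and it uses the GCC/Clang builtin __builtin_popcountll.

    #include <stdint.h>
    #include <stdio.h>

    /* Binarized dot product via XNOR-popcount (illustrative sketch).
     * Values in {-1, +1} are packed 64 per uint64_t; bit value 1 encodes +1.
     * Assumes the total bit count is exactly 64 * nwords. */
    static inline int32_t binary_dot(const uint64_t *a, const uint64_t *b, int nwords)
    {
        int32_t matches = 0;
        for (int i = 0; i < nwords; i++)
            matches += __builtin_popcountll(~(a[i] ^ b[i])); /* XNOR, then count set bits */
        /* Each matching bit contributes +1, each mismatch -1:
         * dot = matches - (64*nwords - matches) = 2*matches - 64*nwords */
        return 2 * matches - 64 * nwords;
    }

    int main(void)
    {
        uint64_t a[1] = { 0xFFFFFFFFFFFFFFFFull }; /* 64 values, all +1 */
        uint64_t b[1] = { 0x0000000000000000ull }; /* 64 values, all -1 */
        printf("%d\n", binary_dot(a, b, 1));       /* all mismatches: prints -64 */
        return 0;
    }

Because one 64-bit XNOR plus a popcount replaces 64 floating-point multiply-accumulates, and the packed operands are read sequentially, this style of kernel is both compute- and cache-efficient, which is the property the proposed computation flow builds on.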
