Abstract

With the astonishing achievements of Convolutional Neural Network (CNN) accelerators in real-time applications, deploying CNNs on hardware has become an attractive research topic. Pooling layers in CNNs are employed to reduce the computation of convolutional layers; nevertheless, their hardware implementation strategy can impact the accuracy and performance of accelerators. This paper presents a novel parallel Stochastic Computing (SC) based architecture for pooling modules in hardware for stochastic CNN accelerators. With this approach, the SC-based average pooling is reconfigurable with 1.28 times lower power consumption, and the max pooling layer achieves a 4.36 times area reduction. Implementing AAD pooling with the proposed method also extends the applicability of stochastic CNN accelerators to a wider range of classification problems. Finally, the reliability of the proposed method is validated by testing our pooling layers in the VGG-16 architecture on the CIFAR-10 dataset.
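As background on how SC-based average pooling operates, the sketch below illustrates mux-based scaled addition, the standard stochastic computing primitive for averaging: values are encoded as the probability of 1s in a bitstream, and a multiplexer with a uniformly random select line outputs a stream whose probability equals the mean of its inputs. This is a generic illustration under unipolar encoding, not the paper's specific parallel architecture; all function names are hypothetical.

```python
import random

def to_bitstream(p, length, rng):
    # Unipolar SC encoding: each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_average_pool(streams, rng):
    # Scaled addition via a multiplexer: at each clock cycle, randomly
    # select one input stream. The output's probability of 1 equals the
    # mean of the input probabilities, i.e. average pooling in SC.
    length = len(streams[0])
    n = len(streams)
    return [streams[rng.randrange(n)][t] for t in range(length)]

def from_bitstream(bits):
    # Decode by counting the fraction of 1s.
    return sum(bits) / len(bits)

rng = random.Random(0)
# A 2x2 pooling window of activations in [0, 1].
window = [0.2, 0.4, 0.6, 0.8]
streams = [to_bitstream(p, 4096, rng) for p in window]
pooled = from_bitstream(sc_average_pool(streams, rng))
print(pooled)  # close to the true average 0.5, up to SC noise
```

The estimate converges to the true average as the bitstream lengthens, which is the usual SC trade-off between latency (stream length) and accuracy.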

