Abstract

Deep convolutional neural networks (DCNNs) are among the most promising models for pattern recognition and classification tasks. With the development of wearable devices and the Internet of Things (IoT), deploying DCNNs on embedded and portable devices is becoming increasingly desirable. However, large-scale DCNNs consume substantial power and hardware resources, making them difficult to deploy on embedded devices with limited power and resource budgets. Previous studies have proposed stochastic computing (SC) as a replacement for the resource-intensive binary arithmetic operations in DCNNs; SC not only simplifies the hardware implementation of arithmetic units but also has the potential to meet the low-power requirements of embedded devices. However, bit-streams in SC usually contain far more bits than the original binary numbers, which inevitably increases storage pressure. To overcome these limitations, we use Multi-Level Cell (MLC) Phase Change Memory (PCM), which offers very low leakage power and high density, in place of dynamic random access memory (DRAM) as the weight storage of the DCNN. We design SC-PCM, an MLC PCM optimization technique dedicated to SC that reduces the write latency and power consumption of MLC PCM. We also propose an effective layer-wise multi-precision SC-DCNN model, which reduces the scale of the neural network without sacrificing accuracy.
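
To illustrate why SC simplifies arithmetic hardware and why its bit-streams inflate storage, the minimal sketch below shows standard unipolar stochastic multiplication, where two values in [0, 1] are encoded as random bit-streams and their product is obtained with a single bitwise AND. This is not the paper's implementation; the stream length and encoding choices here are assumptions made purely for illustration.

    import numpy as np

    # Minimal sketch of unipolar stochastic computing (SC) multiplication.
    # A value x in [0, 1] is encoded as a bit-stream whose probability of a 1
    # equals x; multiplying two independent streams then reduces to a bitwise
    # AND (one AND gate per bit in hardware). The stream length N_BITS is an
    # assumed illustrative value, not a parameter from the paper.

    N_BITS = 1024
    rng = np.random.default_rng(0)

    def to_stream(x, n_bits=N_BITS):
        # Encode x in [0, 1] as a stochastic bit-stream of length n_bits.
        return (rng.random(n_bits) < x).astype(np.uint8)

    def from_stream(stream):
        # Decode a unipolar bit-stream back to a value in [0, 1].
        return stream.mean()

    a, b = 0.6, 0.5
    product_stream = to_stream(a) & to_stream(b)
    print(from_stream(product_stream))  # approximately 0.30, with stochastic error

Note that each operand, which would occupy only a few bits in fixed-point binary form, is represented here by a 1024-bit stream; this length-versus-precision trade-off is the storage pressure that motivates using dense, low-leakage MLC PCM for weight storage.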
