Abstract

Crossbar-enabled analog computing-in-memory (CACIM) systems can significantly improve the computation speed and energy efficiency of deep neural networks (DNNs). However, the performance of DNNs degrades severely when they are deployed on CACIM systems, because the devices represent the weights with low precision owing to intrinsic device variation and high programming overhead. The computational paradigms of CACIM systems and digital systems are essentially different: in CACIM systems the weights are expressed in analog form, and there is no encoding or decoding process during computation. We can exploit this characteristic of data representation to achieve better performance under limited precision. A generalized quantization method that does not constrain the range of quanta and yields lower quantization error is therefore well suited to CACIM systems. For the first time, we introduce such a generalized quantization method into CACIM systems and show superior performance on a series of computer vision tasks, including image classification, object detection, and semantic segmentation. Using the generalized quantization method, a DNN with 8-level analog weights can outperform its 32-bit counterpart. With fewer levels, the generalized quantization method incurs less accuracy loss than uniform quantization methods.
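The abstract does not specify how the unconstrained quanta are chosen; a minimal sketch of one standard way to realize such a generalized (non-uniform) quantizer is Lloyd's algorithm (k-means over the weight values), shown below. The function name `generalized_quantize`, the 8-level setting, and the use of k-means are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (assumption, not the paper's code): non-uniform
# quantization where the k levels ("quanta") may take any real values
# and are fit to minimize mean-squared quantization error.
import numpy as np

def generalized_quantize(weights, n_levels=8, n_iters=50):
    """Quantize a weight array onto n_levels unconstrained real-valued quanta."""
    w = weights.ravel()
    # Initialize quanta from quantiles so every level starts populated.
    quanta = np.quantile(w, np.linspace(0.0, 1.0, n_levels))
    for _ in range(n_iters):
        # Assignment step: map each weight to its nearest quantum.
        idx = np.argmin(np.abs(w[:, None] - quanta[None, :]), axis=1)
        # Update step: move each quantum to the mean of its assigned weights.
        for k in range(n_levels):
            if np.any(idx == k):
                quanta[k] = w[idx == k].mean()
    return quanta[idx].reshape(weights.shape), quanta

# Example: quantize a randomly initialized layer to 8 analog levels.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 256))
w_q, levels = generalized_quantize(w, n_levels=8)
print("levels:", np.round(levels, 4))
print("MSE:", float(np.mean((w - w_q) ** 2)))
```

Because the quanta are free real values rather than points on a uniform grid, they adapt to the weight distribution; this matches the abstract's claim that unconstrained quanta reduce quantization error relative to uniform quantization at the same number of levels.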
