Abstract

Fully-analog in-memory computing (IMC) architectures that implement both matrix-vector multiplication and nonlinear vector operations within the same memory array have shown promising performance benefits over conventional IMC systems due to the removal of energy-hungry signal conversion units. However, maintaining the computation in the analog domain for the entire deep neural network (DNN) comes with potential sensitivity to interconnect parasitics. Thus, in this paper, we investigate the effect of wire parasitic resistance and capacitance on the accuracy of DNN models deployed on fully-analog IMC architectures. Moreover, we propose a partitioning mechanism that alleviates the impact of the parasitics while keeping the computation in the analog domain by dividing large arrays into multiple smaller partitions. Circuit simulation results for a $400 \times 120 \times 84 \times 10$ DNN model deployed on a fully-analog IMC circuit show that a 94.84% accuracy can be achieved for the MNIST classification application with 16, 8, and 8 horizontal partitions, as well as 8, 8, and 1 vertical partitions for the first, second, and third layers of the DNN, respectively, which is comparable to the $\sim 97$% accuracy realized by a digital implementation on a CPU. It is shown that these accuracy benefits come at the cost of the extra circuitry required for handling partitioning.
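To make the partitioning mechanism concrete, the sketch below shows, at a purely functional level, how splitting a large crossbar into horizontal (input-dimension) and vertical (output-dimension) tiles decomposes a matrix-vector multiplication into small per-tile MVMs whose partial sums are recombined. The function `partitioned_mvm`, the NumPy modeling, and the tile bookkeeping are illustrative assumptions, not the paper's SPICE-level circuit implementation, and no parasitic effects are modeled here; the example only demonstrates that the partitioned computation is functionally equivalent to the unpartitioned one.

```python
import numpy as np

def partitioned_mvm(W, x, n_horiz, n_vert):
    """Functional sketch of crossbar partitioning (hypothetical helper).

    W: (n_in, n_out) weight matrix mapped onto the crossbar
    x: (n_in,) input vector applied to the word lines
    n_horiz / n_vert: number of horizontal / vertical partitions
    """
    n_in, n_out = W.shape
    row_chunks = np.array_split(np.arange(n_in), n_horiz)   # horizontal partitions
    col_chunks = np.array_split(np.arange(n_out), n_vert)   # vertical partitions

    y = np.zeros(n_out)
    for rows in row_chunks:
        for cols in col_chunks:
            # Each small tile performs its own MVM; partial results along
            # the split input dimension are accumulated per output column.
            y[cols] += W[np.ix_(rows, cols)].T @ x[rows]
    return y

# Example using the first layer of the 400x120x84x10 model with the
# 16 horizontal / 8 vertical split reported in the abstract.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((400, 120))
x = rng.standard_normal(400)
y_part = partitioned_mvm(W1, x, n_horiz=16, n_vert=8)
assert np.allclose(y_part, W1.T @ x)  # identical result to the full MVM
```

The point of the decomposition is that each tile's word lines and bit lines are shorter than in the monolithic array, so the parasitic resistance and capacitance seen by any single analog MVM are reduced, at the cost of the extra circuitry needed to route and recombine the partial results.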
