Abstract

Computing-in-memory (CIM) architectures perform data storage and computation within the memory array, which reduces the burden of moving large amounts of data and delivers high energy efficiency and high throughput. Since layer structure, such as input feature size, channel depth, and kernel size, varies significantly across DNN layers, a fixed synaptic array size and mapping technique cannot fit all DNN layers well. In this paper, we present a layer-wise exploration of synaptic array size and weight mapping on a heterogeneous tile-based RRAM CIM architecture. Based on the channel depth and kernel size of each DNN layer, our methodology selects the tile architecture and weight mapping that best increase both memory utilization and system throughput. Experimental results show that the proposed heterogeneous CIM tile architecture and weight mapping method fully utilize the synaptic arrays.
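
To illustrate the intuition behind layer-wise array selection, the sketch below estimates how well a convolution layer's unrolled weight matrix fills candidate synaptic array sizes and picks the best-utilized one per layer. This is a minimal sketch, not the paper's method: it assumes a simple row-by-column unrolling (kernel x kernel x input channels as rows, output channels as columns), and the candidate array sizes, layer shapes, and function names are illustrative assumptions.

```python
# Hypothetical sketch of layer-wise array exploration; all sizes are assumptions,
# not values from the paper.
import math

def array_utilization(kernel, c_in, c_out, rows, cols):
    """Fraction of crossbar cells holding weights when a conv layer of shape
    (kernel x kernel x c_in x c_out) is unrolled onto rows x cols arrays."""
    w_rows = kernel * kernel * c_in      # unrolled input dimension
    w_cols = c_out                       # one column per output channel
    n_arrays = math.ceil(w_rows / rows) * math.ceil(w_cols / cols)
    return (w_rows * w_cols) / (n_arrays * rows * cols)

def best_array(kernel, c_in, c_out,
               candidates=((64, 64), (128, 128), (256, 256))):
    """Return the candidate array size with the highest utilization for this layer."""
    return max(candidates,
               key=lambda rc: array_utilization(kernel, c_in, c_out, *rc))

# A shallow early layer barely fills a large fixed array, while a deep layer fills it;
# choosing the array size per layer avoids the wasted cells.
print(array_utilization(3, 3, 64, 256, 256))     # shallow layer on 256x256: ~3% utilized
print(array_utilization(3, 512, 512, 256, 256))  # deep layer on 256x256: fully utilized
print(best_array(3, 3, 64))                      # per-layer choice picks a smaller array
```

In practice the exploration would also weigh throughput and peripheral-circuit overhead, not utilization alone, but the example shows why a single fixed array size cannot suit every layer.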
