Abstract

Deep Neural Networks (DNNs) have transformed the field of machine learning (ML) and are widely deployed in many applications involving image, video, speech, and natural language processing. The increasing compute demands of DNNs have been widely addressed through Graphics Processing Units (GPUs) and specialized accelerators. However, as model sizes grow, these von Neumann architectures require very high off-chip memory bandwidth to keep their processing elements utilized, since the majority of the data resides in main memory. Processing in memory is actively being explored as a promising solution to the memory-wall bottleneck for ML workloads. In this work, we propose a new DRAM-based processing-in-memory (PIM) multiplication primitive, coupled with intra-bank accumulation, to accelerate matrix-vector multiplication in ML workloads. The proposed multiplication primitive adds less than 1% area overhead and does not require any change to the DRAM peripherals. Subsequently, we design a DRAM-based PIM architecture (PIM-DRAM) and a data-mapping scheme for executing DNNs on the proposed architecture. System evaluations performed on the AlexNet, VGG16, and ResNet18 DNNs show that the proposed architecture, mapping, and data flow can provide up to 19.5x speedup over an NVIDIA Titan Xp GPU, highlighting the potential of processing in memory for future generations of DNN hardware.
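To make the target computation concrete, the following is a minimal NumPy sketch of the kind of matrix-vector multiply that the proposed primitive accelerates. It is an illustration only, not the paper's bit-level DRAM primitive: the bank count, the column-wise split of the weight matrix, and the function names are assumptions introduced here to show how per-bank partial products can be accumulated locally before a final gather, in the spirit of intra-bank accumulation.

```python
import numpy as np

NUM_BANKS = 8  # assumed bank count, chosen for illustration only


def pim_style_mvm(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Emulate a bank-partitioned matrix-vector multiply y = W @ x.

    The weight matrix is split column-wise across hypothetical banks;
    each bank multiplies its slice of W by the matching slice of x and
    accumulates the partial result. In real PIM hardware this work would
    happen next to the DRAM arrays, so only partial sums leave the bank
    instead of the full weight matrix crossing the memory bus.
    """
    rows, cols = W.shape
    col_splits = np.array_split(np.arange(cols), NUM_BANKS)
    y = np.zeros(rows, dtype=W.dtype)
    for bank_cols in col_splits:
        # Per-bank multiply-accumulate on this bank's weight slice.
        y += W[:, bank_cols] @ x[bank_cols]
    return y


# Quick check against a plain matrix-vector product.
W = np.random.randn(64, 128).astype(np.float32)
x = np.random.randn(128).astype(np.float32)
assert np.allclose(pim_style_mvm(W, x), W @ x, atol=1e-4)
```

The key point the sketch captures is that the weights stay in place and only small partial sums are communicated, which is what relieves the off-chip bandwidth pressure described above.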
