Abstract
Deep Neural Networks (DNNs) have transformed the field of machine learning (ML) and are widely deployed in many applications involving image, video, speech, and natural language processing. The increasing compute demands of DNNs have been widely addressed through Graphics Processing Units (GPUs) and specialized accelerators. However, as model sizes grow, these von Neumann architectures require very high off-chip memory bandwidth to keep the processing elements utilized, since a majority of the data resides in main memory. Processing in memory is actively being explored as a promising solution to the memory wall bottleneck for ML workloads. In this work, we propose a new DRAM-based processing-in-memory (PIM) multiplication primitive coupled with intra-bank accumulation to accelerate matrix-vector multiply operations in ML workloads. The proposed multiplication primitive adds <1% area overhead and does not require any change to the DRAM peripherals. Subsequently, we design a DRAM-based PIM architecture (PIM-DRAM) and a data mapping scheme for executing DNNs on the proposed architecture. System evaluations performed on the AlexNet, VGG16, and ResNet18 DNNs show that the proposed architecture, mapping, and data flow can provide up to 19.5x speedup over an NVIDIA Titan Xp GPU, highlighting the potential of processing in memory for future generations of DNN hardware.
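The sketch below is a minimal functional model of the dataflow the abstract describes: a matrix-vector multiply whose input dimension is partitioned across DRAM banks, with each bank accumulating its partial sums locally before a final reduction. It does not reproduce the paper's bit-level in-DRAM multiplication primitive; names such as NUM_BANKS, bank_matvec, and pim_matvec are hypothetical and chosen only for illustration.

```python
"""Functional sketch (assumed names, not the paper's implementation) of a
matrix-vector multiply partitioned across DRAM banks with intra-bank
accumulation followed by a cross-bank reduction."""
import numpy as np

NUM_BANKS = 8  # assumed number of banks cooperating on one matrix-vector multiply


def bank_matvec(weight_slice: np.ndarray, activation_slice: np.ndarray) -> np.ndarray:
    """Model of one bank's work: element-wise products of its weight slice and
    the matching activation slice, accumulated locally (intra-bank accumulation)."""
    return (weight_slice * activation_slice).sum(axis=1)


def pim_matvec(weights: np.ndarray, activations: np.ndarray) -> np.ndarray:
    """Split the input dimension across banks, compute per-bank partial sums,
    then reduce them to form the output vector."""
    col_chunks = np.array_split(np.arange(weights.shape[1]), NUM_BANKS)
    partials = [bank_matvec(weights[:, c], activations[c]) for c in col_chunks]
    return np.sum(partials, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 128))  # e.g., one fully connected layer's weights
    x = rng.standard_normal(128)        # input activation vector
    assert np.allclose(pim_matvec(W, x), W @ x)  # matches a reference matvec
```

The point of the intra-bank accumulation in this model is that only one partial-sum vector per bank crosses the bank boundary, rather than every element-wise product, which is how the proposed architecture reduces data movement for matrix-vector multiplies.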