Abstract

Processing-in-memory (PIM) offers a promising solution to the main-memory bottleneck by placing computational logic in or near memory devices to reduce data-movement overheads. Recent work has explored how commercial DRAM can incorporate digital PIM logic while meeting fab-level energy and area constraints, and has demonstrated significant speedups in the inference time of data-intensive deep learning models. However, convolutional neural network (CNN) models have not been considered primary targets for commercial DRAM-PIM because of their compute-intensive convolution layers. Moreover, recent studies have shown that the area and power constraints of the memory die prevent DRAM-PIM from competing with GPUs and specialized accelerators in accelerating these layers. Recently, mobile CNN models have increasingly adopted compositions of depthwise and pointwise (1x1) convolutions in place of such compute-intensive convolutions to reduce computation cost without an accuracy drop. In this paper, we show that 1x1 convolutions can be offloaded for PIM acceleration with integrated runtime support and without any hardware or algorithm changes. We achieve further speedup through parallel execution on the GPU and DRAM-PIM and through code-generation optimizations. Our solution achieves up to a 35.2% (31.6% on average) speedup over a GPU baseline for all 1x1 convolutions in mobile CNN models.
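
The claim that 1x1 convolutions can be offloaded without algorithm changes rests on a standard equivalence: a pointwise convolution applies the same C_out x C_in weight matrix to the channel vector at every spatial position, so over the flattened positions it is a single matrix multiplication, the bandwidth-bound GEMM/GEMV pattern that digital DRAM-PIM engines target. The sketch below is a minimal NumPy illustration of that equivalence, not the authors' runtime; the function name and tensor shapes are assumptions chosen for clarity.

```python
import numpy as np

def pointwise_conv_as_gemm(x, w):
    """1x1 (pointwise) convolution expressed as a plain matrix multiplication.

    x: input activations, shape (C_in, H, W)
    w: 1x1 kernel weights, shape (C_out, C_in)

    Flattening the H*W spatial positions turns the per-pixel channel
    mixing into one GEMM over a (C_in, H*W) activation matrix.
    """
    c_in, h, w_dim = x.shape
    x_flat = x.reshape(c_in, h * w_dim)       # (C_in, H*W)
    y_flat = w @ x_flat                       # (C_out, H*W) GEMM
    return y_flat.reshape(-1, h, w_dim)       # back to (C_out, H, W)

# Quick check against a direct per-position computation.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 7, 7)).astype(np.float32)   # hypothetical layer sizes
w = rng.standard_normal((64, 32)).astype(np.float32)

y = pointwise_conv_as_gemm(x, w)
y_ref = np.einsum("oc,chw->ohw", w, x)
assert np.allclose(y, y_ref, atol=1e-4)
```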
