Abstract

Traditionally, researchers have attempted to address the memory wall by building a deep memory hierarchy. Another solution is to move computation closer to memory, often referred to as processing in memory (PIM). Past PIM solutions have tried to move computing logic near memory by integrating DRAM with a logic die using 3D stacking. This reduces data-movement energy and increases bandwidth; however, the functionality and design of the memory itself remain unchanged. An even more exciting technology is one that dissolves the line distinguishing memory from computational units. Nearly three-fourths of the silicon in processor and main-memory dies is devoted simply to storing and accessing data. Repurposing this silicon area to perform computation can enable massively parallel processing. Furthermore, we naturally save the vast amounts of energy spent shuffling data back and forth between computational and storage units, and memory bandwidth ceases to be a meaningful metric.
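To make the contrast concrete, the following toy model (an illustration, not part of the paper) treats memory as a grid of bit rows and compares a conventional path, where operand rows must cross the memory bus to the processor, against an in-memory path, where a row-wide bitwise operation is applied to entire rows in place, in the spirit of in-DRAM bulk bitwise proposals. The row width, row count, and the helper names `cpu_and` and `in_memory_and` are all hypothetical choices for the sketch.

```python
import numpy as np

# Toy model: memory as a grid of bit rows. A conventional system streams
# every operand row over a narrow bus to the CPU and back; a
# compute-in-memory design applies one row-wide bitwise operation to
# whole rows in place. The row width and the AND primitive are
# illustrative assumptions, not a specific hardware proposal.

ROW_BITS = 8192          # assumed width of one memory row, in bits
NUM_ROWS = 1024

rng = np.random.default_rng(0)
memory = rng.integers(0, 2, size=(NUM_ROWS, ROW_BITS), dtype=np.uint8)

def cpu_and(a_row: int, b_row: int) -> np.ndarray:
    """Conventional path: both operand rows cross the memory bus to the
    CPU, and the result crosses back. Bits moved: 3 * ROW_BITS."""
    a = memory[a_row].copy()   # read over the bus
    b = memory[b_row].copy()   # read over the bus
    return a & b               # computed far from the data

def in_memory_and(a_row: int, b_row: int, dst_row: int) -> None:
    """In-memory path: the row-wide AND happens inside the array, so no
    operand bits cross the bus; only a command does."""
    memory[dst_row] = memory[a_row] & memory[b_row]

in_memory_and(0, 1, 2)
assert np.array_equal(memory[2], cpu_and(0, 1))
print("operand bits moved (CPU path):", 3 * ROW_BITS)
print("operand bits moved (PIM path):", 0)
```

Under these assumptions, every row of the array can in principle be operated on in parallel, which is the source of the massive parallelism and data-movement savings the abstract describes.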
