Abstract

The articles in this special section focus on in-memory computing. Computer designers have traditionally separated the roles of storage and compute units: memories and caches store data, while processors' logic units compute on it. Is this separation necessary? A human brain does not separate the two so distinctly. Why should a processor? The in-/near-memory computing paradigm blurs this distinction and imposes a dual responsibility on memory substrates: storing data and computing on it. Modern processors and accelerators dedicate over 90% of their aggregate silicon area to memory. In-/near-memory processing converts these memory units into powerful allies for massively parallel computing, which can accelerate a plethora of applications including neural networks,
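One widely studied realization of this paradigm is the analog crossbar, where a weight matrix is stored as cell conductances and a matrix-vector product is computed in place by Ohm's and Kirchhoff's laws. The sketch below is illustrative only (the function name and values are not from the article); it models the physics digitally to show why a single array "read" performs an entire matrix-vector multiply, the core kernel of neural-network inference.

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Model one analog in-memory matrix-vector multiply (hypothetical sketch).

    conductances: (rows, cols) array of cell conductances (siemens)
    voltages:     (rows,) array of input word-line voltages (volts)
    returns:      (cols,) array of bit-line currents (amperes)
    """
    # Each cell contributes current G[i, j] * V[i]; currents sum along
    # each bit line, so the whole product emerges from one parallel read.
    return conductances.T @ voltages

# Example: a 3x2 weight matrix applied to a 3-element input vector
G = np.array([[1.0, 0.5],
              [0.2, 0.3],
              [0.4, 0.1]])
V = np.array([1.0, 2.0, 3.0])
print(crossbar_mvm(G, V))  # matches G.T @ V computed in a digital ALU
```

In this model the memory array never ships the weights to a separate logic unit; the multiply-accumulate happens where the data lives, which is the source of the paradigm's parallelism and energy savings.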
