Abstract

3D-stacking technology has enabled the option of embedding a large DRAM cache onto the processor. Since the DRAM cache can be orders of magnitude larger than a conventional SRAM cache, its cache tags can also be large. Recent works have proposed storing these tags in the stacked DRAM array itself. However, this increases the complexity of a DRAM cache request, which now translates into multiple DRAM cache accesses (tag and data). In this work, we address how to schedule these DRAM cache accesses. We start by exploring whether a conventional DRAM controller will work well. We introduce two potential baseline designs and study their limitations. We then derive a set of design principles that a DRAM cache controller should ideally satisfy. Our DRAM-cache-aware (DCA) DRAM controller, which is based on these principles, consistently improves performance across various DRAM cache organizations.
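
As a rough illustration of why a tags-in-DRAM organization turns one cache request into multiple DRAM accesses, the following C++ sketch models a lookup that expands into a tag access followed by a data access (or a fill on a miss). This is a toy model under simplifying assumptions, not the paper's DCA controller or baseline designs; all names (DramCacheModel, access, ...) are illustrative.

```cpp
// Hypothetical sketch (not the paper's DCA controller): a tags-in-DRAM
// cache lookup decomposed into the sub-accesses a DRAM controller must
// schedule. All class and function names here are illustrative only.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct Line { uint64_t tag; bool valid; };

class DramCacheModel {
    // Simplified direct-mapped tag/data store kept in the stacked DRAM array,
    // so reading a tag costs a DRAM access just like reading data.
    std::unordered_map<uint64_t, Line> sets_;
public:
    // A single cache request expands into two DRAM accesses: tag, then data.
    bool access(uint64_t addr, uint64_t num_sets) {
        uint64_t set = addr % num_sets;
        uint64_t tag = addr / num_sets;

        // 1) Tag access: read the tag stored in the DRAM row for this set.
        auto it = sets_.find(set);
        bool hit = (it != sets_.end() && it->second.valid && it->second.tag == tag);

        // 2) Data access on a hit; on a miss, fill from off-chip memory and
        //    install the new tag (an additional DRAM write in a real design).
        if (!hit) sets_[set] = {tag, true};
        return hit;
    }
};

int main() {
    DramCacheModel cache;
    const uint64_t kSets = 1 << 20;
    std::cout << "first access hit?  " << cache.access(0x1234ULL, kSets) << "\n"; // miss
    std::cout << "second access hit? " << cache.access(0x1234ULL, kSets) << "\n"; // hit
}
```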
