Abstract

Heterogeneous memory systems promise both large storage capacity and high performance. Prior DRAM cache systems suffer from metadata scalability issues, low cache hit rates, low DRAM space utilization, and significant data migration overhead. We observe that instances of an object type exhibit stable and predictable memory access patterns across their constituent cachelines; we refer to these patterns as object fingerprints. We propose a hardware-assisted cache that manages DRAM at the object-type level and fetches data at cacheline granularity by exploiting object fingerprints. To address its design challenges, we first present a software-hardware co-design that conveys software-level object information to the hardware. Second, we design multi-granularity sector caches that can be dynamically adjusted to adapt to changing access behaviors and improve DRAM cache utilization. To address the challenge of large metadata storage overhead, we propose to bound the set of possible sizes for each sector cache. Experimental results show that our design improves DRAM cache hit rate by 21.6%, boosts IPC by 19.8%, and reduces data migration traffic by 51.6% on average, compared with state-of-the-art DRAM caches. Moreover, our online object fingerprint learning method is only 2.3% below the offline one in terms of IPC.
