Abstract

Current DRAM-based memory systems face scalability challenges in memory density, energy consumption, and monetary cost. Hybrid memory architectures that combine emerging nonvolatile memory (NVM) with DRAM are a promising approach to large-capacity, energy-efficient main memory. However, hybrid memory systems pose a new challenge for on-chip cache management because the miss penalties of DRAM and NVM accesses are asymmetric. Cache hit rate is therefore no longer an effective metric for evaluating memory access performance in hybrid memory systems, and cache replacement policies that aim only to improve the hit rate are not efficient either. In this article, we take the asymmetry of the cache miss penalty between DRAM and NVM into account and advocate a more general metric, average memory access time (AMAT), to evaluate the performance of hybrid memories. We propose MALRU, a miss-penalty-aware LRU-based cache replacement policy for hybrid memory systems. MALRU is aware of the source (DRAM or NVM) of missing blocks and preserves high-latency NVM blocks, as well as low-latency DRAM blocks with good temporal locality, in the last-level cache. Experimental results show that MALRU improves system performance by up to 22.8% and 13.1% over LRU and the state-of-the-art hybrid-memory-aware cache partitioning technique, respectively.
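The AMAT metric weights each miss by the penalty of the device that serves it, which is what makes it sensitive to the DRAM/NVM asymmetry while plain hit rate is not. A minimal sketch of the idea (the latency figures below are illustrative assumptions, not values from the article):

```python
# AMAT for a hybrid DRAM/NVM memory behind a last-level cache.
# Latency numbers are illustrative assumptions only.
LLC_HIT_LATENCY = 30      # cycles for an LLC hit
DRAM_MISS_PENALTY = 200   # cycles to fetch a block from DRAM
NVM_MISS_PENALTY = 600    # cycles to fetch a block from NVM

def amat(hit_rate, miss_rate_dram, miss_rate_nvm):
    """AMAT = hit time + per-device miss rates weighted by device penalty."""
    assert abs(hit_rate + miss_rate_dram + miss_rate_nvm - 1.0) < 1e-9
    return (LLC_HIT_LATENCY
            + miss_rate_dram * DRAM_MISS_PENALTY
            + miss_rate_nvm * NVM_MISS_PENALTY)

# Same overall hit rate, different miss mix: the NVM-heavy mix is slower,
# so hit rate alone cannot rank these two behaviors.
print(amat(0.90, 0.08, 0.02))  # misses mostly served by DRAM
print(amat(0.90, 0.02, 0.08))  # misses mostly served by NVM
```

With identical 90% hit rates, shifting misses from DRAM to NVM raises the AMAT, which is the asymmetry MALRU exploits.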

Highlights

  • In-memory computing is becoming increasingly popular for data-intensive applications in the big data era

  • Based on the two heuristics and cache re-reference interval prediction (RRIP), we propose MALRU, a miss-penalty-aware LRU-based cache replacement policy that keeps nonvolatile memory (NVM) blocks and near-immediately re-referenced DRAM blocks in the last-level cache (LLC) as long as possible

  • To calculate HitRaten and HitRated, we introduce the variables αj,i and βj,i: αj,i is the probability that the jth position of the whole least recently used (LRU) stack holds exactly the ith DRAM block, and βj,i is the probability that the jth position of the whole LRU stack holds exactly the ith NVM block
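The per-source hit rates those probabilities describe can be made concrete with a small LRU-stack simulation that tags each block with its backing memory. The trace and the block-to-device mapping below are assumptions for illustration, not the article's analytical model:

```python
import random

def lru_hit_rates(trace, assoc):
    """Replay accesses on one LRU set, counting DRAM and NVM hits separately.

    Each access is (block_id, source), source in {"dram", "nvm"}.
    Index 0 of `stack` is the MRU position; the last entry is the LRU victim.
    """
    stack, hits = [], {"dram": 0, "nvm": 0}
    for block, source in trace:
        if block in stack:
            hits[source] += 1
            stack.remove(block)        # promote to MRU
        elif len(stack) == assoc:
            stack.pop()                # evict the LRU block
        stack.insert(0, block)
    n = len(trace)
    return hits["dram"] / n, hits["nvm"] / n  # (HitRate_d, HitRate_n)

# Synthetic trace: 12 distinct blocks, half backed by DRAM, half by NVM
# (an assumption for illustration only).
random.seed(0)
blocks = [random.randrange(12) for _ in range(1000)]
trace = [(b, "dram" if b < 6 else "nvm") for b in blocks]
hr_d, hr_n = lru_hit_rates(trace, assoc=8)
print(hr_d, hr_n)
```

Splitting the hit count by source in this way is exactly what makes a miss-penalty-aware policy evaluable: the two rates can then be combined with the asymmetric DRAM/NVM penalties rather than summed into a single hit rate.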


Summary

Introduction

In-memory computing is becoming increasingly popular for data-intensive applications in the big data era. We first briefly introduce hybrid DRAM/NVM memory architectures and then describe previous cache replacement policies. Previous studies have proposed two hybrid DRAM/NVM main memory architectures: 1) a hierarchical cache/memory architecture [15]–[18] and 2) a flat-addressable (single address space) memory architecture [8]–[10]. In the latter, both DRAM and NVM are attached to the memory bus and are visible to the processors and OSes
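In a flat-addressable design, every physical address maps to exactly one device, so the memory controller can route an access by a simple range check. A toy sketch of that routing (the capacities are assumptions for illustration):

```python
# Flat single address space: DRAM occupies the low range, NVM the high range.
# Sizes are illustrative assumptions, not figures from the article.
DRAM_SIZE = 4 << 30    # 4 GiB of DRAM
NVM_SIZE = 16 << 30    # 16 GiB of NVM

def route(addr):
    """Map a physical address in the flat space to (device, device offset)."""
    if addr < DRAM_SIZE:
        return "dram", addr
    if addr < DRAM_SIZE + NVM_SIZE:
        return "nvm", addr - DRAM_SIZE
    raise ValueError("address out of range")

print(route(1024))            # ('dram', 1024)
print(route(DRAM_SIZE + 42))  # ('nvm', 42)
```

Because the device serving each address is fixed, the LLC can tell at miss time whether a missing block comes from DRAM or NVM, which is the source information a miss-penalty-aware policy relies on.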

