Abstract

The large working sets of commercial and scientific workloads favor a shared L2 cache design in chip multiprocessors (CMPs), since it maximizes aggregate cache capacity and minimizes off-chip memory requests. Two important hurdles restrict the scalability of such chip multiprocessors: the on-chip memory cost of the directory and long L1 miss latencies. This work presents a network victim cache architecture that addresses both problems by exploiting the on-chip network to manage shared caches. The architecture removes the directory structure from the shared L2 cache and instead stores directory information for blocks recently cached by the L1 caches in the network interface components, decreasing on-chip directory memory overhead and improving scalability. The saved memory space is used as victim caches embedded in the network interface components, further reducing L1 miss latencies. The proposed architecture is evaluated through simulations of a 16-core tiled CMP. Results demonstrate that the network victim cache architecture scales better than a traditional CMP with a shared L2 cache design and improves performance by 23% on average, and by up to 34% at best.
