Abstract

Improving cache performance is a long-standing research topic. As exploiting data locality to enhance cache performance has become increasingly difficult, data access concurrency offers a new opportunity for cache performance optimization. In this work, we propose a novel concurrency-aware cache management framework that outperforms state-of-the-art locality-only cache management schemes. First, we investigate the merit of data access concurrency and show that reducing the miss rate does not necessarily lead to better overall performance. Next, we introduce the pure miss contribution (PMC) metric, a lightweight and versatile concurrency-aware indicator that accurately measures the cost of each outstanding miss by accounting for data concurrency. We then present CARE, a dynamically adjustable, concurrency-aware, low-overhead cache management framework built on the PMC metric. We evaluate CARE with extensive experiments across different application domains and observe significant performance gains from exploiting data concurrency. In a 4-core system, CARE improves IPC by 10.3% over LRU replacement. In 8- and 16-core systems, where more concurrent data accesses exist, CARE outperforms LRU by 13.0% and 17.1%, respectively.
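To make the intuition behind a concurrency-aware miss-cost metric concrete, here is a minimal C++ sketch. The abstract does not give PMC's exact formula, so this models one plausible interpretation: a miss accrues "pure miss" cycles only while it is the sole outstanding miss, since overlapped misses hide each other's latency. The `PmcTracker` class and its `miss_issued`/`miss_returned`/`tick` interface are hypothetical names for illustration, not the paper's implementation.

```cpp
#include <cstdint>
#include <unordered_map>
#include <unordered_set>

// Illustrative sketch only: charges a block with a pure-miss cycle whenever
// its miss is the only one in flight, i.e., its latency is fully exposed.
struct PmcTracker {
    std::unordered_set<uint64_t> outstanding;    // block addresses with misses in flight
    std::unordered_map<uint64_t, uint64_t> pmc;  // per-block pure-miss cycle counts

    void miss_issued(uint64_t block)   { outstanding.insert(block); }
    void miss_returned(uint64_t block) { outstanding.erase(block); }

    // Called once per cycle by a (hypothetical) simulator loop.
    void tick() {
        // Exactly one outstanding miss: no concurrency hides its latency,
        // so the stalled cycle is attributed to that block's PMC.
        // With multiple overlapping misses, no pure-miss cost accrues
        // under this model.
        if (outstanding.size() == 1)
            pmc[*outstanding.begin()] += 1;
    }
};

int main() {
    PmcTracker t;
    t.miss_issued(0x40);   // lone miss: the next cycles are pure-miss cycles
    t.tick(); t.tick();
    t.miss_issued(0x80);   // second concurrent miss: latencies now overlap
    t.tick();              // no PMC charged this cycle
    t.miss_returned(0x40);
    t.tick();              // 0x80 is alone again: charged
    // pmc[0x40] == 2, pmc[0x80] == 1 under this illustrative model
    return 0;
}
```

A replacement policy could then prefer to keep blocks with high PMC, since their misses stall the core outright, rather than ranking victims by recency alone.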
