Abstract

In modern multicore systems, the Last-Level Cache (LLC) is usually shared among multiple cores. Although sharing lets applications utilize cache resources efficiently, it comes at the cost of increased conflict misses due to interference among applications. In a shared LLC, the conventionally used LRU-based cache replacement policies logically partition the cache on an on-demand basis. Thus, cache-friendly applications that share the LLC with streaming applications suffer from the streaming applications' high data demands and low reuse. Apart from differing data-locality behavior, applications also exhibit different memory access behavior when accessing the LLC: some inherently issue parallel memory accesses, while others incur more isolated long-latency accesses. The cost of the idle cycles a processor spends waiting for off-chip memory accesses is shared among parallel misses, whereas misses that occur in isolation hurt performance the most. This adds another dimension to an application's behavior. We propose an application-behavior-aware cache replacement policy to manage the shared LLC. The proposed policy simultaneously reduces the negative interference among applications sharing the LLC and the miss penalty associated with each LLC miss. Evaluation on SPEC CPU2006 benchmarks shows that our replacement policy improves performance on dual-core and quad-core systems by up to 15.9% and 23.8%, respectively, over SRRIP for a shared LLC. Notably, the effectiveness of our policy improves as the number of cores increases.
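The claim that parallel misses share the cost of idle cycles while isolated misses bear it fully can be illustrated with a minimal sketch of an MLP-aware miss-cost model. This is not the paper's implementation; the function name, interval representation, and 200-cycle memory latency are illustrative assumptions. Each cycle a miss is outstanding charges it 1/k of a cycle, where k is the number of misses outstanding that cycle:

```python
def mlp_aware_costs(miss_intervals):
    """Illustrative MLP-aware miss-cost model (not the paper's code).

    miss_intervals: list of (start_cycle, end_cycle) pairs, one per
    outstanding off-chip miss.  Each cycle a miss is outstanding is
    charged 1/k of a cycle, where k is the number of misses
    outstanding during that cycle, so overlapped misses split the
    stall cost while an isolated miss bears it alone.
    """
    costs = []
    for (s, e) in miss_intervals:
        cost = 0.0
        for cycle in range(s, e):
            # Count how many misses are outstanding this cycle
            # (including this one) and share the cycle among them.
            k = sum(1 for (s2, e2) in miss_intervals if s2 <= cycle < e2)
            cost += 1.0 / k
        costs.append(cost)
    return costs

# Assuming a 200-cycle off-chip latency: an isolated miss is charged
# the full 200 cycles, while two fully overlapped misses are charged
# 100 cycles each.
isolated = mlp_aware_costs([(0, 200)])            # [200.0]
parallel = mlp_aware_costs([(0, 200), (0, 200)])  # [100.0, 100.0]
```

Under such a model, a replacement policy can prefer to retain blocks whose misses would be isolated (and hence costly), which is the intuition behind reducing the per-miss penalty alongside inter-application interference.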
