Abstract

Although the Least Recently Used (LRU) policy is known as a simple, high-performance cache replacement policy, it is rarely adopted in high-associativity caches because of its hardware overhead. The Re-Reference Interval Prediction (RRIP) policy [3] is a high-performance policy that suppresses this hardware overhead. However, the RRIP policy does not improve performance when employed in higher-level caches, and in fact causes significant performance degradation for several applications. This is because, on a cache miss, the RRIP policy updates a block's priority without considering the priorities of all the blocks in the set; in several applications, the priorities of the existing blocks are consequently minimized or left unchanged. To avoid this problem, this paper proposes a cache replacement policy named Adaptive Demotion Policy (ADP). ADP focuses on a subtraction value, which is subtracted from the priority value of every block in the set on a cache miss. Depending on the level of the cache hierarchy, ADP uses either half of the average or the full average of the priority values of all the blocks in the set as the subtraction value. This prevents the priorities of the existing blocks from being minimized or left unchanged. In addition, ADP suits a wide range of applications through appropriate selection of its insertion, promotion, and selection policies. The evaluation results show that ADP can be implemented with less hardware overhead than the LRU policy, and that its priority controller operates faster than that of the LRU policy for high-associativity caches. The performance evaluation shows that, compared with the LRU and RRIP policies, ADP reduces MPKI at all levels of the cache hierarchy and improves IPC.
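As a rough illustration of the demotion step described in the abstract, the following C sketch subtracts an average-based value from the priority of every block in a set on a miss. It is reconstructed from the abstract alone: the field names, the insertion priority, and the rule of evicting the lowest-priority block are assumptions for illustration, not the authors' implementation.

/*
 * Hypothetical sketch of ADP's demotion step, based only on the abstract.
 * Field names, INSERT_PRIORITY, and the victim-selection rule are assumed.
 */
#include <stdint.h>
#include <stddef.h>

#define ASSOC 16
#define INSERT_PRIORITY 2          /* assumed insertion value for a new block */

typedef struct {
    uint8_t priority;              /* per-block priority value */
    /* tag, valid bit, etc. omitted */
} block_t;

/* Demote every block in the set on a miss.  For a higher-level cache the
 * subtraction value is half of the average priority; for a lower-level
 * cache it is the full average, as described in the abstract. */
static void adp_demote(block_t set[ASSOC], int is_higher_level)
{
    unsigned sum = 0;
    for (size_t i = 0; i < ASSOC; i++)
        sum += set[i].priority;

    unsigned avg = sum / ASSOC;
    unsigned sub = is_higher_level ? avg / 2 : avg;

    for (size_t i = 0; i < ASSOC; i++)
        set[i].priority = (set[i].priority > sub) ? set[i].priority - sub : 0;
}

/* Pick a victim: assumed here to be the block with the lowest priority. */
static size_t adp_select_victim(const block_t set[ASSOC])
{
    size_t victim = 0;
    for (size_t i = 1; i < ASSOC; i++)
        if (set[i].priority < set[victim].priority)
            victim = i;
    return victim;
}

/* On a miss: demote the existing blocks, evict a victim, insert the new block. */
static void adp_on_miss(block_t set[ASSOC], int is_higher_level)
{
    adp_demote(set, is_higher_level);
    size_t v = adp_select_victim(set);
    set[v].priority = INSERT_PRIORITY;   /* assumed insertion policy */
}

Because the subtraction value depends on the priorities of all blocks in the set, no miss can leave every existing block's priority unchanged (unless all are already zero), which is the behavior the abstract contrasts with RRIP.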
