Abstract

When a program's working set exceeds the size of its last-level cache, performance suffers from the resulting off-chip memory accesses. Cache compression can increase the effective cache capacity and thereby reduce misses, but it also adds access latency because cache lines must be decompressed before they are used. Compression can therefore help some applications and hurt others, depending on the working set of the running program and the achievable compression ratio. Previous studies proposed techniques that enable compression dynamically to adapt to program behavior. For shared caches in multi-core processors, the compression decision becomes more challenging because the cache is shared by multiple applications that may benefit differently from compression. This paper proposes Thread-Aware Dynamic Cache Compression (TADCC) to make better compression decisions on a per-thread basis. An Access Time Tracker (ATT) estimates the access latencies of the different compression decisions, and it is supported by a Decision Switching Filter (DSF) that provides stability and robustness. As a result, TADCC outperforms a previously proposed adaptive cache compression technique by 8% on average and by as much as 17%.
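The trade-off the abstract describes can be made concrete with a minimal sketch. This is not the paper's actual ATT mechanism; it simply compares the estimated average access time of a thread running with and without compression, using hypothetical latency and hit-rate values chosen for illustration.

```python
# Illustrative sketch (NOT the paper's ATT implementation): a per-thread
# choice between compressed and uncompressed cache operation, based on
# estimated average access time. All latencies and hit rates below are
# hypothetical values chosen only to demonstrate the trade-off.

HIT_LATENCY = 4      # cycles for an uncompressed cache hit (assumed)
DECOMP_LATENCY = 5   # extra cycles to decompress a line (assumed)
MISS_PENALTY = 200   # cycles for an off-chip memory access (assumed)

def avg_access_time(hit_rate, compressed):
    """Estimated average access latency for one thread."""
    hit_cost = HIT_LATENCY + (DECOMP_LATENCY if compressed else 0)
    return hit_rate * hit_cost + (1 - hit_rate) * MISS_PENALTY

def choose_compression(hit_rate_plain, hit_rate_compressed):
    """Pick the decision with the lower estimated access time.
    Compression raises the hit rate (larger effective capacity)
    but adds decompression latency to every hit."""
    t_plain = avg_access_time(hit_rate_plain, compressed=False)
    t_comp = avg_access_time(hit_rate_compressed, compressed=True)
    return "compress" if t_comp < t_plain else "no-compress"

# A thread whose working set fits only when compressed benefits:
print(choose_compression(hit_rate_plain=0.70, hit_rate_compressed=0.95))
# → compress
# A thread that already fits should avoid the decompression latency:
print(choose_compression(hit_rate_plain=0.95, hit_rate_compressed=0.95))
# → no-compress
```

The two calls illustrate why a single global decision is insufficient for a shared cache: with these assumed parameters, one thread prefers compression while the other is penalized by it, which is the motivation for making the decision per thread.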
