Abstract

Increasing cache access latencies limit L1 cache sizes. In this paper we investigate restrictive compression techniques for the level 1 data cache that avoid an increase in the cache access latency. The basic technique, all words narrow (AWN), compresses a cache block only if all the words in the block are narrow. We extend the AWN technique to store a few upper half-words (AHS) in a cache block, so that a small number of normal-sized words can also be accommodated. Further, we make the AHS technique adaptive, so that the additional half-word space is allocated among the cache blocks as needed. We also propose techniques to limit the increase in tag space that is inevitable with compression. Overall, the techniques in this paper increase the average L1 data cache capacity (measured as the average number of valid cache blocks per cycle) by about 50% compared to a conventional cache, with no or minimal impact on the cache access time. In addition, the techniques have the potential to reduce the average L1 data cache miss rate by about 23%.
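
To make the compression condition concrete, the sketch below checks whether a cache block qualifies for AWN compression, and for an AHS-style relaxation that tolerates a few normal-sized words. The 32-bit word size, 16-bit narrow threshold, sign-extension test, and 8-word block are illustrative assumptions, not the paper's exact parameters or hardware logic.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define WORDS_PER_BLOCK 8   /* assumed block size: 8 x 32-bit words */

/* A 32-bit word is treated as "narrow" when its upper half-word is merely the
 * sign extension of the lower half-word (all zeros or all ones), so only the
 * lower 16 bits need to be stored. */
static bool is_narrow(uint32_t word)
{
    uint16_t upper = (uint16_t)(word >> 16);
    return upper == 0x0000u || upper == 0xFFFFu;
}

/* AWN test: the block may be stored in compressed (half-word) form only if
 * every word in it is narrow. */
static bool awn_compressible(const uint32_t block[WORDS_PER_BLOCK])
{
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        if (!is_narrow(block[i]))
            return false;
    return true;
}

/* AHS-style test: tolerate up to extra_slots normal-sized words, whose upper
 * half-words would be kept in the block's additional half-word slots. */
static bool ahs_compressible(const uint32_t block[WORDS_PER_BLOCK], int extra_slots)
{
    int wide = 0;
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        if (!is_narrow(block[i]))
            wide++;
    return wide <= extra_slots;
}

int main(void)
{
    /* One word (0x00012345) is not narrow, so AWN rejects the block
     * while AHS with two extra slots accepts it. */
    uint32_t blk[WORDS_PER_BLOCK] = {
        5, 0xFFFFFFF0u, 0x00012345u, 0, 7, 0xFFFF8000u, 3, 1
    };
    printf("AWN: %d, AHS(2): %d\n",
           awn_compressible(blk), ahs_compressible(blk, 2));
    return 0;
}
```

The adaptive AHS variant would then vary extra_slots per block according to demand rather than fixing it, which is the idea the abstract summarizes.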
