Abstract

As memory latencies increase, the importance of cache performance improvements at each level of the memory hierarchy will continue to grow. At the same time, as available chip area grows, it becomes practical to devote more resources to intelligent cache management that adapts caching decisions to dynamic access behavior. In the past, cache management techniques such as cache bypassing were applied manually at the instruction level, and spatial locality was typically exploited through large block sizes and fixed degrees of hardware prefetching. The goal of this research is to develop a framework for adaptive, automatic control of cache management techniques that improves cache effectiveness in the face of long memory latencies while balancing performance against implementation cost. Specifically, the authors aim to increase data cache effectiveness for integer programs. They propose a microarchitecture scheme in which the hardware determines data placement based on dynamic reference behavior; the scheme is fully compatible with existing instruction set architectures. Initial studies show that run-time adaptive cache management can significantly improve the overall performance of integer applications, with the gains coming from increased cache hit rates and reduced cache miss handling latencies. However, a large amount of potential improvement in cache hit ratios still remains.
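The abstract does not describe the placement mechanism in detail, but the core idea of hardware-driven, reference-based allocation decisions can be illustrated with a minimal sketch. The model below assumes a small table of saturating counters indexed by coarse memory region, where regions that have shown little reuse are bypassed (not allocated in the cache) on a miss. The region granularity, table size, threshold, and function names are hypothetical choices for illustration only, not the scheme proposed by the authors.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of run-time adaptive cache bypassing (assumed
 * parameters, not the paper's actual microarchitecture).  Each coarse
 * memory region gets a saturating counter tracking how often it is
 * referenced; regions with low counts are presumed to lack reuse and
 * are bypassed on a miss instead of being allocated in the cache. */

#define MACROBLOCK_SHIFT   10      /* 1 KiB regions (hypothetical)      */
#define TABLE_ENTRIES      1024    /* direct-mapped counter table       */
#define COUNTER_MAX        15      /* 4-bit saturating counter          */
#define BYPASS_THRESHOLD   2       /* below this count, bypass on miss  */

static uint8_t ref_counter[TABLE_ENTRIES];

/* Map an address to its counter entry. */
static unsigned table_index(uint64_t addr) {
    return (unsigned)((addr >> MACROBLOCK_SHIFT) % TABLE_ENTRIES);
}

/* Called on every memory reference: update the region's counter and
 * decide whether a miss to this address should allocate a cache block. */
bool should_allocate(uint64_t addr) {
    unsigned idx = table_index(addr);
    if (ref_counter[idx] < COUNTER_MAX)
        ref_counter[idx]++;
    return ref_counter[idx] >= BYPASS_THRESHOLD;
}

int main(void) {
    /* Example: a streaming address touched once vs. a frequently reused one. */
    uint64_t stream = 0x100000, hot = 0x200000;
    for (int i = 0; i < 8; i++)
        should_allocate(hot);
    printf("hot: allocate=%d  stream: allocate=%d\n",
           should_allocate(hot), should_allocate(stream));
    return 0;
}
```

The design intent this sketch captures is that the decision is made entirely in hardware at run time, from observed reference behavior, so no instruction set changes or programmer hints are needed, which matches the abstract's claim of full compatibility with existing instruction set architectures.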
