Abstract

A new conceptual cache, the NRP (non-referenced prefetch) cache, is proposed to improve the performance of instruction prefetch mechanisms that prefetch both the sequential and non-sequential blocks under limited memory bandwidth. The NRP cache stores prefetched blocks that were not referenced by the CPU, blocks that previous prefetch mechanisms simply discarded. By retaining these non-referenced prefetched blocks in the NRP cache, both cache misses and memory traffic are reduced. A prefetch method that fetches both the sequential and the non-sequential instruction paths is designed to exploit the NRP cache. Results from trace-driven simulation show that this approach provides better memory access time than other prefetch methods. In particular, the NRP cache is more effective in a lookahead prefetch mechanism, which can hide longer memory latency. The NRP cache also reduces the additional memory traffic required to prefetch both instruction paths by 50–112%. This approach achieves both improved memory access time and reduced memory traffic, making it a cost-effective cache design.
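
To make the abstract's mechanism concrete, the following is a minimal, illustrative sketch of the NRP-cache idea: prefetched blocks that are displaced from the instruction cache without ever being referenced are moved into a small NRP cache instead of being discarded, and the NRP cache is checked on a demand miss before going to memory. All class and method names (SimpleCache, NRPFetchUnit, access) are assumptions for illustration, not the paper's implementation; the paper's own scheme also covers non-sequential-path prefetching, which is omitted here for brevity.

```python
# Illustrative sketch of an NRP (non-referenced prefetch) cache.
# Assumptions (not from the paper): fully associative LRU structures,
# block-granularity addresses, and a simple next-block sequential prefetch.

from collections import OrderedDict

class SimpleCache:
    """Fully associative LRU cache of block addresses (capacity in blocks)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block address -> True

    def lookup(self, addr):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)    # refresh LRU position
            return True
        return False

    def insert(self, addr):
        """Insert a block; return the address of the evicted block, if any."""
        evicted = None
        if addr not in self.blocks and len(self.blocks) >= self.capacity:
            evicted, _ = self.blocks.popitem(last=False)
        self.blocks[addr] = True
        return evicted


class NRPFetchUnit:
    """Demand fetch plus sequential prefetch, with an NRP cache that keeps
    prefetched blocks that were displaced before being referenced."""
    def __init__(self, icache_blocks=64, nrp_blocks=8):
        self.icache = SimpleCache(icache_blocks)
        self.nrp = SimpleCache(nrp_blocks)
        self.prefetched = {}                 # block addr -> referenced yet?
        self.mem_fetches = 0                 # proxy for memory traffic

    def access(self, block_addr):
        # 1) Hit in the instruction cache.
        if self.icache.lookup(block_addr):
            if block_addr in self.prefetched:
                self.prefetched[block_addr] = True
            return "icache hit"
        # 2) Hit in the NRP cache: reuse a block that an ordinary prefetch
        #    scheme would already have thrown away.
        if self.nrp.lookup(block_addr):
            self._install(block_addr)
            return "NRP hit"
        # 3) Miss everywhere: demand fetch, then prefetch the next block.
        self.mem_fetches += 1
        self._install(block_addr)
        self._prefetch(block_addr + 1)
        return "miss"

    def _install(self, addr):
        victim = self.icache.insert(addr)
        # A victim that was prefetched but never referenced goes into the
        # NRP cache instead of being dropped.
        if victim is not None and self.prefetched.pop(victim, True) is False:
            self.nrp.insert(victim)

    def _prefetch(self, addr):
        if not self.icache.lookup(addr) and not self.nrp.lookup(addr):
            self.mem_fetches += 1
            self.prefetched[addr] = False    # fetched, but not yet referenced
            self._install(addr)
```

Running a short instruction-block trace through NRPFetchUnit and counting "NRP hit" outcomes against mem_fetches illustrates, in miniature, how retaining non-referenced prefetched blocks converts some would-be misses into hits without extra memory fetches.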
