Abstract

As the gap between processor speed and memory speed grows, the performance penalty of instruction cache misses increases. Instruction cache prefetching is a technique for reducing this penalty. Existing prefetching methods generally determine the target line to be prefetched from the address of the currently fetched line. However, as cache lines become wider, a single line may contain multiple branches, a hurdle these methods must overcome. The authors have developed a new instruction cache prefetching method, called branch instruction based (BIB) prefetching, in which prefetching is directed by branch prediction and the prefetch information is recorded in an extended BTB. Simulation results show that for commercial benchmarks, BIB prefetching outperforms traditional sequential prefetching by 7% and other prediction-table-based prefetching methods by 17% on average. As BTB designs become more sophisticated and achieve higher hit rates and prediction accuracy, BIB prefetching can achieve correspondingly higher performance.

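To illustrate the idea of branch-directed prefetching, the following is a minimal sketch, not the paper's actual design: it assumes a 64-byte instruction cache line, a small direct-mapped BTB, and illustrative field names, and simply prefetches the line on the predicted path (the target line if the branch is predicted taken, the next sequential line otherwise). The exact prefetch information stored in the extended BTB is an assumption here.

#include <stdbool.h>
#include <stdint.h>

#define BTB_ENTRIES 512
#define LINE_SHIFT  6   /* assumed 64-byte instruction cache line */

/* Hypothetical extended BTB entry: the usual tag, target and direction
 * state, used here to derive the prefetch line along the predicted path.
 * Field names are illustrative, not taken from the paper. */
typedef struct {
    uint64_t tag;             /* branch PC */
    uint64_t target;          /* predicted branch target */
    bool     predict_taken;   /* simple 1-bit direction prediction */
    bool     valid;
} btb_entry_t;

static btb_entry_t btb[BTB_ENTRIES];

/* Stub for the instruction cache's prefetch port. */
static void issue_prefetch(uint64_t line_addr) { (void)line_addr; }

/* On each fetch, consult the extended BTB. On a hit, prefetch the line the
 * branch prediction says will be fetched next: the line holding the target
 * if the branch is predicted taken, otherwise the next sequential line. */
static void bib_prefetch_lookup(uint64_t fetch_pc)
{
    btb_entry_t *e = &btb[(fetch_pc >> 2) % BTB_ENTRIES];
    if (!e->valid || e->tag != fetch_pc)
        return;                                    /* BTB miss: no guidance */

    uint64_t pred_line = e->predict_taken
        ? (e->target >> LINE_SHIFT)                /* taken path */
        : ((fetch_pc >> LINE_SHIFT) + 1);          /* fall-through path */
    issue_prefetch(pred_line);
}

The key contrast with sequential or fetch-address-based prefetching is that the prefetch target follows the predicted control flow rather than the current line address, which matters when a wide line contains several branches.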