Abstract

The cache design problem is often constrained by limiting the cache cycle time to a given CPU cycle time. However, this approach treats as secondary the cache's large impact on overall performance. A better strategy is to choose a system cycle time that accommodates the needs of both the CPU and the cache and that optimizes program execution time. The main memory access time has two components: the latency and the transfer time. In the simulation model, the latency is a fixed number of nanoseconds, so as the cycle time decreases, the number of cycles per miss penalty increases. As the miss penalty increases, a fixed change in the miss rate has a greater impact on performance. Alternatively, as the miss penalty increases, a fixed change in the miss rate is equivalent to an increasing fraction of the cycle time. So, as the cycle time decreases, the increasing fraction of a cycle that is equivalent to a change in cache size is offset by the decreasing cycle time itself, causing the tradeoff between cycle time and miss rate to remain relatively constant.
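To make the arithmetic behind this argument concrete, the following sketch models execution time as a function of cycle time, miss rate, and a fixed memory latency. It is not the paper's simulator; the instruction count, references per instruction, base CPI, latency, transfer time, and miss rate below are all assumed values chosen only to illustrate how a shorter cycle time turns the same fixed latency into a larger miss penalty measured in cycles.

/* Illustrative sketch (assumed parameters, not the paper's simulation model). */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double instructions   = 1e9;   /* assumed instruction count             */
    const double refs_per_instr = 1.3;   /* assumed memory references per instr   */
    const double base_cpi       = 1.0;   /* assumed CPI with a perfect cache      */
    const double latency_ns     = 200.0; /* fixed main-memory latency (ns)        */
    const double transfer_ns    = 80.0;  /* assumed block transfer time (ns)      */
    const double miss_rate      = 0.05;  /* assumed cache miss rate               */

    /* Compare two cycle times: the same fixed latency costs more cycles at the
     * shorter cycle time, so the miss penalty (in cycles) grows as cycle time
     * shrinks, amplifying the performance impact of a given miss-rate change. */
    for (double cycle_ns = 40.0; cycle_ns >= 20.0; cycle_ns -= 20.0) {
        double penalty_cycles = ceil((latency_ns + transfer_ns) / cycle_ns);
        double total_cycles   = instructions * base_cpi
                              + instructions * refs_per_instr * miss_rate * penalty_cycles;
        printf("cycle %.0f ns: miss penalty %.0f cycles, exec time %.3f s\n",
               cycle_ns, penalty_cycles, total_cycles * cycle_ns * 1e-9);
    }
    return 0;
}

Running the sketch shows both effects described above: halving the cycle time doubles the miss penalty in cycles, while the absolute cost of each miss (in nanoseconds) stays fixed, which is why the cycle-time versus miss-rate tradeoff remains roughly constant.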
