Abstract

Caches, wider data paths, and burst-transfer memory are the major hardware techniques used to reduce the latency between the processor and main memory. We explore the design space among the hit ratio (and hence cache size, or an improved cache structure), data path width, and transfer-memory design through a performance-tradeoff methodology. For the tradeoffs among these three factors, our evaluation shows that if a D-byte data path system and a 2D-byte data path system have the same performance, then the hit-ratio difference that trades for the performance of the wider data path lies between 0 (lower bound) and 1 - HR (upper bound), where HR is the hit ratio of the D-byte system. For current main-memory systems, doubling the data path trades about half of the upper bound of the hit ratio that would be traded in a transfer-time-dominated system. Doubling the data bus is more advantageous when the processor is designed with a high-speed, non-constant-time-dominated L2 cache. Doubling the bus width trades a large percentage of the hit ratio when a large amount of non-cacheable 2D-byte memory traffic exists.
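
The bounds stated above can be illustrated with a simple average-memory-access-time (AMAT) model. The sketch below is not the paper's evaluation model; the hit time, access latency, block size, and cycle-time values are illustrative assumptions only. It equalizes the AMAT of a D-byte and a 2D-byte system and reports the hit-ratio difference the wider bus trades, which approaches 1 - HR when transfer time dominates the miss penalty and approaches 0 when a constant access latency dominates.

```python
# Minimal sketch (assumed model, not the paper's):
#   AMAT = t_hit + (1 - HR) * miss_penalty
#   miss_penalty = t_latency + (block / bus_width) * t_cycle

def amat(hit_ratio, t_hit, t_latency, block, width, t_cycle):
    """Average memory access time for a given hit ratio and data path width."""
    miss_penalty = t_latency + (block / width) * t_cycle
    return t_hit + (1.0 - hit_ratio) * miss_penalty

def matching_hit_ratio(hr_d, t_hit, t_latency, block, d, t_cycle):
    """Hit ratio the 2D-byte system needs to match the D-byte system's AMAT."""
    target = amat(hr_d, t_hit, t_latency, block, d, t_cycle)
    miss_penalty_2d = t_latency + (block / (2 * d)) * t_cycle
    # Solve target = t_hit + (1 - HR') * miss_penalty_2d for HR'.
    return 1.0 - (target - t_hit) / miss_penalty_2d

if __name__ == "__main__":
    hr = 0.95  # hit ratio of the D-byte system; 1 - HR = 0.05 is the upper bound
    cases = {
        "transfer-time dominated (latency ~ 0)":  dict(t_latency=0,   t_cycle=10),
        "latency ~ half the transfer time":       dict(t_latency=40,  t_cycle=10),
        "latency dominated (constant part slow)": dict(t_latency=200, t_cycle=1),
    }
    for name, params in cases.items():
        hr_2d = matching_hit_ratio(hr, t_hit=1, block=32, d=4, **params)
        print(f"{name}: traded hit ratio = {hr - hr_2d:.3f}")
```

Under these assumptions the first case reaches the 1 - HR bound, the last approaches 0, and the middle case trades roughly half of the bound, mirroring the abstract's observation for memory systems whose access latency is comparable to the transfer time.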
