Abstract

Cache prefetching has been proposed as an important technique for hiding and tolerating the average latency of memory accesses by overlapping processor computation with data accesses. In this paper, we analyze a single-bus multiprocessor using a Stochastic Timed Petri Net (STPN) model to study the effects of parameters such as latency (memory and network) and degree of prefetching on system speed-up and network contention. Our results indicate that fixed sequential prefetching with a degree of prefetching equal to four improves speed-up for medium latencies (64 processor cycles with an 80 MHz processor) whenever the probability that the prefetched data in the buffers are useful is high, i.e., greater than 0.5.
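For readers unfamiliar with the scheme the abstract refers to, the following is a minimal sketch (not the paper's STPN model) of fixed sequential prefetching with a degree of four: on a miss for block b, the demand fetch for b is issued and blocks b+1..b+4 are speculatively placed in a small prefetch buffer that later references probe first. The names PREFETCH_DEGREE, prefetch_buffer, and access_block are illustrative only.

```c
#include <stdbool.h>
#include <stdio.h>

#define PREFETCH_DEGREE 4
#define BUFFER_SLOTS    8

/* Small fully associative prefetch buffer; -1 marks an empty slot. */
static long prefetch_buffer[BUFFER_SLOTS] = { -1, -1, -1, -1, -1, -1, -1, -1 };
static int  next_slot = 0;

static bool in_buffer(long block)
{
    for (int i = 0; i < BUFFER_SLOTS; i++)
        if (prefetch_buffer[i] == block)
            return true;
    return false;
}

static void buffer_insert(long block)
{
    prefetch_buffer[next_slot] = block;        /* simple FIFO replacement */
    next_slot = (next_slot + 1) % BUFFER_SLOTS;
}

/* Returns true when the reference hits the prefetch buffer (a "useful"
 * prefetch); on a miss it fetches the block and prefetches the next four. */
static bool access_block(long block)
{
    if (in_buffer(block))
        return true;                           /* useful prefetched data */

    /* The demand fetch of 'block' would occur here, incurring the
     * memory/network latency measured in processor cycles in the paper. */
    for (long d = 1; d <= PREFETCH_DEGREE; d++)
        buffer_insert(block + d);              /* fixed sequential prefetches */
    return false;
}

int main(void)
{
    int hits = 0;
    for (long b = 0; b < 20; b++)              /* a purely sequential stream */
        hits += access_block(b);
    printf("useful prefetch hits: %d / 20\n", hits);
    return 0;
}
```

On a purely sequential reference stream the buffer hit rate is high, which corresponds to the regime the abstract identifies (probability of useful prefetched data above 0.5) where prefetching improves speed-up despite the extra bus traffic.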
