Abstract

A model is described and analyzed for a multiprocessor shared-memory system in which each memory bank can service a fixed number of access requests per cpu cycle. If n processors simultaneously request data from a common shared memory, it is usually not possible for all the requests to be satisfied at the same time, because the memory system places a limit on the number of requests it can service at any given moment. A typical method for allowing multiple requests to be satisfied is to divide the memory into m banks, each capable of satisfying requests independently. The memory banks are usually interleaved, so that requests to successive memory locations are serviced by successive memory banks. Even if m = n, full memory bus bandwidth usually cannot be achieved: if n processors each make a memory request at random, it is unlikely that the n requests will all go to different memory banks. In fact, the probability that the n requests go to all n banks is the number of permutations of Zn divided by the number of mappings of Zn into Zn, where Zn denotes the set of integers from 1 to n. This probability is n!/n^n, which, as n gets large, approaches √(2πn)/e^n by Stirling's formula. One solution is to make m > n. Since cost increases with the number of memory banks, it is important to know how the average memory bandwidth is affected by the values of m and n. Several mathematical models have appeared in the literature to predict the behavior of such systems (1-5, 8, 11-13). The model closest to the behavior of the machine we are interested in (the Alliant FX/8 multiprocessor) is the one Hoogendoorn (8) calls the Uniform Static-Access Matrix. In this model, each processor accesses each memory bank with equal probability (1/m), and the memory requests are random and independent. If a memory bank gets more than one request, it services exactly one of them.
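The probability above can be checked numerically. A minimal sketch comparing the exact value n!/n^n with the Stirling approximation √(2πn)/e^n:

```python
import math

def exact_prob(n):
    """Probability that n independent uniform requests land in n distinct banks:
    n! permutations out of n**n possible mappings."""
    return math.factorial(n) / n**n

def stirling_prob(n):
    """Large-n approximation via Stirling's formula: sqrt(2*pi*n) / e**n."""
    return math.sqrt(2 * math.pi * n) / math.e**n

# The exact probability falls off rapidly, and the Stirling
# approximation tracks it closely even for small n.
for n in (2, 4, 8, 16):
    print(n, exact_prob(n), stirling_prob(n))
```

Already at n = 8 the chance that all eight requests hit distinct banks is about 0.24%, which is why m = n gives poor average bandwidth.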
The other requests are rejected and are not queued. There is no assumption that the requesting processor will repeat the same request on the next cpu cycle. (This is clearly a deficiency in the model.) Ravi (13) showed that the expected number of requests that are serviced is
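The excerpt ends before the formula itself. For this model (n independent requests, each uniform over m banks, one serviced per bank), the standard derivation counts a bank as busy unless all n requests miss it, giving an expected value of m(1 - (1 - 1/m)^n); whether this is exactly the expression the text goes on to cite from Ravi (13) is an assumption here. A sketch:

```python
def expected_serviced(n, m):
    """Expected number of requests serviced per cycle under the
    Uniform Static-Access Matrix model: a bank receives no request
    with probability (1 - 1/m)**n, so the expected number of busy
    banks (one request serviced each) is m * (1 - (1 - 1/m)**n).
    NOTE: standard result for this model; assumed, not quoted from Ravi (13)."""
    return m * (1 - (1 - 1 / m)**n)

# With m = n = 8, only about 5.25 of the 8 requests are serviced
# on average (roughly 66% of full bandwidth); raising m above n
# recovers some of the loss.
print(expected_serviced(8, 8))
print(expected_serviced(8, 16))
```

This makes the cost trade-off in the abstract concrete: doubling the banks from 8 to 16 raises the expected number of serviced requests, but with diminishing returns.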
