Abstract

In a distributed computing system, it is desirable to balance the workload among processors while keeping the communication overhead at a minimum. The nearest neighbor balancing strategy requires little communication overhead compared to more sophisticated dynamic load balancing policies. The question is whether this nearest neighbor balancing policy can achieve system balance. This paper investigates a typical nearest neighbor balancing strategy, called LAL (Local Average Load), in which the workload of a processor is averaged among its nearest neighbors at discrete time steps. The underlying systems considered are multiprocessor systems interconnected by generalized hypercube (GHC), torus, and other symmetric structures. Analytical models are developed for the load balancing scheme on the considered architectures, assuming that the amount of computation tasks arriving at or finished by a processor at each time step can be described by a random variable with some general distribution. We show that the expected difference between the actual load of a processor and the average load of the system is 0, and that the variance of this difference is bounded by a constant independent of time. For each considered architecture, we derive a simple closed-form formula for the variance upper bound as a function of system size and dimension. Furthermore, we generalize our analytical results to any linear symmetric load balancing scheme on symmetric distributed architectures.
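To make the LAL scheme concrete, the following is a minimal simulation sketch, not the paper's model: it assumes a 1-D torus (ring) of processors, uniform averaging over a processor and its two nearest neighbors, and i.i.d. Gaussian load increments as an illustrative stand-in for the general arrival/completion distribution. All function and parameter names are hypothetical.

```python
import numpy as np

def lal_step(load, rng, sigma=1.0):
    """One discrete time step of a nearest-neighbor averaging (LAL-style) scheme on a ring.

    Each processor replaces its load with the average of its own load and
    that of its two nearest neighbors, then receives a random load increment.
    (Illustrative sketch; the paper treats GHC, torus, and other symmetric topologies.)
    """
    n = len(load)
    # Nearest-neighbor averaging on a 1-D torus (ring of n processors).
    averaged = (np.roll(load, 1) + load + np.roll(load, -1)) / 3.0
    # Random arrivals/completions per processor (assumption: i.i.d. Gaussian).
    increment = rng.normal(0.0, sigma, size=n)
    return averaged + increment

def simulate(n=64, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    load = np.zeros(n)
    deviations = []
    for _ in range(steps):
        load = lal_step(load, rng)
        # Deviation of each processor's load from the system-wide average load.
        deviations.append(load - load.mean())
    dev = np.array(deviations)
    # Empirically, the mean deviation stays near 0 and its variance remains
    # bounded over time, in the spirit of the paper's analytical results.
    print("mean deviation over run:", dev.mean())
    print("variance of deviation at last step:", dev[-1].var())

if __name__ == "__main__":
    simulate()
```

Running the sketch shows the per-processor deviation from the system average hovering around zero with a variance that does not grow with the number of steps, which is the qualitative behavior the paper proves analytically for symmetric architectures.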
