Abstract

In this paper we discuss the need for load balancing, also called scheduling. We exhibit different reasons that render static (compile-time) scheduling impossible and that determine the dynamic (run-time) load balancing schemes needed to obtain efficient parallel algorithms. One distinguishes between local load balancing policies, where processors base their decisions on information about the load in some neighborhood, and global load balancing policies, where processors base their decisions on the load of the entire machine. Depending on the static information available and on the dependencies between the different tasks, some parallel algorithms are well served by simple load balancing or load sharing mechanisms, while others need more sophisticated solutions. The former are typically local, while the latter are global load balancing schemes. In particular, we analyze the branch and bound algorithm and show that it needs smart load balancing mechanisms, ideally founded on global knowledge. We argue that for this algorithm a global load balancing policy may be of interest. Indeed, the best-first branch and bound algorithm can be defined as a sequence of independent computations, allowing the design of a parallel algorithm that alternates between coarse-grained parallel computation phases and so-called synchronization phases, which provide perfect global load balancing.
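To make the alternation described above concrete, the following is a minimal sketch, not the paper's implementation: each worker expands its own best-first pool of open nodes during a coarse-grained computation phase, then a synchronization phase gathers all open nodes, prunes them against the incumbent, and redistributes them evenly across the workers. The toy problem, the `branch` and `is_leaf` helpers, and the parameters `NUM_WORKERS` and `CHUNK` are illustrative assumptions.

```python
import heapq
import random

def branch(node):
    """Hypothetical branching rule: split a node into two children."""
    depth, cost = node
    return [(depth + 1, cost + random.random()),
            (depth + 1, cost + random.random())]

def is_leaf(node):
    """Hypothetical leaf test: stop subdividing at a fixed depth."""
    depth, _ = node
    return depth >= 10

NUM_WORKERS = 4
CHUNK = 16            # nodes expanded per worker in one computation phase
best = float("inf")   # incumbent: best solution value found so far

# One local priority queue per worker, ordered by lower bound (best-first).
pools = [[] for _ in range(NUM_WORKERS)]
heapq.heappush(pools[0], (0.0, (0, 0.0)))       # (lower bound, root node)

while any(pools):
    # Coarse-grained computation phase: workers expand their local pools.
    for pool in pools:
        for _ in range(CHUNK):
            if not pool:
                break
            lb, node = heapq.heappop(pool)
            if lb >= best:
                continue                         # pruned by the incumbent
            if is_leaf(node):
                best = min(best, lb)             # new incumbent
            else:
                for child in branch(node):
                    heapq.heappush(pool, (child[1], child))

    # Synchronization phase: gather every open node, prune, and redistribute
    # them in global best-first order so each worker gets an equal share.
    open_nodes = sorted(item for pool in pools for item in pool if item[0] < best)
    pools = [[] for _ in range(NUM_WORKERS)]
    for i, item in enumerate(open_nodes):
        heapq.heappush(pools[i % NUM_WORKERS], item)

print("best value found:", best)
```

In this sketch the workers are simulated sequentially; on a real machine the computation phase would run in parallel and the synchronization phase would be the point where global load information is exchanged, which is what yields the perfect global load balancing mentioned in the abstract.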
