Abstract

Scheduling computations together with their communications is the theoretical basis for achieving efficient parallelism on distributed-memory systems. We generalize Graham's task level to incorporate the effects of both computation and communication. A new scheduling algorithm is proposed that combines task priority with efficient management of processor idle time. We also propose an optimization called Iterative Refinement Scheduling (IRS) that iteratively schedules the forward and backward computation graph. The task levels used in each scheduling iteration are obtained from the schedule generated in the previous iteration, so each iteration produces a new schedule and new task levels. This approach enables searching for and optimizing solutions by using an increasingly refined task level in each scheduling iteration. Evaluation and analysis of the results are carried out for different communication granularities and degrees of problem parallelism. It is shown that solutions obtained after a few iterations statistically outperform those generated by other recently proposed scheduling algorithms. IRS explores a space of solutions whose size grows with the amount of parallelism and the communication granularity, and it enables optimizing the solution especially for critical instances such as fine-grain computations and large parallelism.
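The abstract describes the IRS loop only at a high level. The following is a minimal Python sketch of that idea, written under assumptions: a simple list scheduler that ranks ready tasks by level and places each on the processor giving the earliest finish, plus an outer loop that recomputes levels from the previous schedule. The cost model, the level-refinement rule, and the functions static_levels, list_schedule, and irs are illustrative names introduced here, not the paper's formulation; the forward/backward alternation and idle-slot insertion are omitted for brevity.

```python
# Illustrative sketch (not the paper's implementation) of iterative refinement
# scheduling: list-schedule a task DAG on P processors, then recompute task
# levels from the resulting schedule and reschedule. Graph structure, cost
# model, and the refinement rule are assumptions made for this example.

def static_levels(succ, comp, comm):
    """Bottom-up level: longest computation + communication path to a sink."""
    memo = {}
    def level(t):
        if t not in memo:
            memo[t] = comp[t] + max(
                (comm.get((t, s), 0) + level(s) for s in succ[t]), default=0)
        return memo[t]
    for t in comp:
        level(t)
    return memo

def list_schedule(pred, succ, comp, comm, levels, num_procs):
    """Greedy list scheduling: pick the ready task with the highest level and
    place it on the processor that yields the earliest finish time."""
    proc_free = [0.0] * num_procs            # when each processor becomes idle
    start, finish, placed_on = {}, {}, {}
    indeg = {t: len(pred[t]) for t in comp}
    ready = [t for t in comp if indeg[t] == 0]
    while ready:
        ready.sort(key=lambda t: -levels[t])
        task = ready.pop(0)
        best = None
        for p in range(num_procs):
            # data from a predecessor on another processor pays the comm cost
            data_ready = max(
                (finish[u] + (comm.get((u, task), 0) if placed_on[u] != p else 0)
                 for u in pred[task]), default=0.0)
            st = max(proc_free[p], data_ready)
            if best is None or st + comp[task] < best[0]:
                best = (st + comp[task], st, p)
        ft, st, p = best
        start[task], finish[task], placed_on[task] = st, ft, p
        proc_free[p] = ft
        for s in succ[task]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return finish, placed_on

def irs(pred, succ, comp, comm, num_procs, iterations=3):
    """Outer refinement loop: levels for iteration k are derived from the
    schedule of iteration k-1 (here, distance from the task's start time to
    the makespan, an assumed refinement rule)."""
    levels = static_levels(succ, comp, comm)
    best_makespan, best_schedule = float("inf"), None
    for _ in range(iterations):
        finish, placed_on = list_schedule(pred, succ, comp, comm, levels, num_procs)
        makespan = max(finish.values())
        if makespan < best_makespan:
            best_makespan, best_schedule = makespan, (finish, placed_on)
        levels = {t: makespan - (finish[t] - comp[t]) for t in comp}
    return best_makespan, best_schedule

if __name__ == "__main__":
    # Tiny example DAG: a -> b, a -> c, b -> d, c -> d
    comp = {"a": 2, "b": 3, "c": 2, "d": 1}
    comm = {("a", "b"): 1, ("a", "c"): 4, ("b", "d"): 1, ("c", "d"): 1}
    succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    pred = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
    makespan, _ = irs(pred, succ, comp, comm, num_procs=2)
    print("makespan:", makespan)
```

In this sketch the refinement step simply feeds the previous schedule's start times back in as priorities; the space of schedules explored therefore depends on how much the level ordering changes between iterations, which mirrors the abstract's observation that the solution space grows with parallelism and communication granularity.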
