Abstract

Asynchronous pipelining is a form of parallelism that is useful in both distributed and shared memory systems. We show that asynchronous pipeline schedules are a generalization of both noniterative DAG (directed acyclic graph) schedules and simpler pipeline schedules, unifying these two types of scheduling. We generalize previous work on determining whether a pipeline schedule will deadlock, and generalize Reiter's well-known formula for determining the iteration interval of a deadlock-free schedule, which is the primary measure of the execution time of a schedule. Our generalizations account for nonzero communication times (easy) and the assignment of multiple tasks to processors (nontrivial). A key component of our generalized approach to pipeline schedule analysis is the use of pipeline scheduling edges with potentially negative data dependence distances. We also discuss the implementation of an asynchronous pipeline schedule at runtime; show how to efficiently simulate pipeline execution on a sequential processor; define and derive bounds on the startup time of a schedule, a secondary measure of schedule performance; and describe a new algorithm for evaluating the iteration interval formula.
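For context, in its classical setting Reiter's formula gives the minimum iteration interval as the maximum cycle ratio of the dependence graph. The notation below is illustrative rather than the paper's:

$$P \;=\; \max_{C} \; \frac{\sum_{e \in C} t(e)}{\sum_{e \in C} d(e)}$$

where C ranges over the directed cycles of the graph, t(e) is the time charged to edge e (the task time, plus communication time in the generalized setting), and d(e) is the data dependence distance of e, i.e., the number of iterations the dependence spans. Deadlock freedom corresponds to every cycle having positive total distance, which keeps the ratio finite.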
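One generic way to evaluate such a maximum-cycle-ratio expression is binary search on a candidate interval combined with a Bellman-Ford positive-cycle test. The sketch below shows only this textbook technique, not the paper's new evaluation algorithm; the graph encoding (edge tuples carrying a time t and a distance d) is an assumption made for illustration.

    def max_cycle_ratio(nodes, edges, eps=1e-9):
        """Maximum over directed cycles of (sum of t) / (sum of d).

        nodes: iterable of node ids.
        edges: list of (u, v, t, d) tuples with t >= 0 the time charged
        to the edge and d its dependence distance. Every cycle is assumed
        to have positive total distance (the deadlock-free case);
        individual edges may carry zero or even negative distances, since
        only cycle totals matter to the ratio.
        """
        nodes = list(nodes)

        def has_positive_cycle(lam):
            # A cycle with ratio > lam has positive total weight under
            # w(e) = t - lam*d; detect it as a negative cycle under -w
            # using Bellman-Ford with an implicit zero-distance source.
            dist = {v: 0.0 for v in nodes}
            for _ in range(len(nodes)):
                changed = False
                for (u, v, t, d) in edges:
                    w = -(t - lam * d)
                    if dist[u] + w < dist[v] - 1e-12:
                        dist[v] = dist[u] + w
                        changed = True
                if not changed:
                    return False  # distances stable: no negative cycle
            # Still relaxing after |V| passes implies a negative cycle.
            return any(dist[u] - (t - lam * d) < dist[v] - 1e-12
                       for (u, v, t, d) in edges)

        # The max ratio cannot exceed the total time when every cycle has
        # total distance >= 1 (integer distances), so this bound is safe.
        lo, hi = 0.0, sum(t for (_, _, t, _) in edges) + 1.0
        while hi - lo > eps:
            mid = (lo + hi) / 2.0
            if has_positive_cycle(mid):
                lo = mid  # some cycle's ratio still exceeds mid
            else:
                hi = mid
        return hi

A toy usage, with hypothetical tasks A and B: A feeds B within an iteration (distance 0) and B feeds A in the next iteration (distance 1), so the single cycle has ratio (3 + 2) / (0 + 1) = 5 and the iteration interval is 5 time units.

    nodes = ["A", "B"]
    edges = [("A", "B", 3.0, 0), ("B", "A", 2.0, 1)]
    print(round(max_cycle_ratio(nodes, edges), 6))  # -> 5.0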
