This paper describes the distributed pipeline scheduling framework, which provides a systematic approach to designing distributed, heterogeneous real-time systems. The paper formalizes distributed pipeline scheduling by providing a set of abstractions and transformations that map real-time applications to system resources, create highly efficient and predictable systems, and decompose the very complex multi-resource system timing analysis problem into a set of simpler application-stream and single-resource schedulability problems, so that it can be ascertained that all real-time application timing requirements are met. Distributed pipeline scheduling includes support for distributed, heterogeneous system resources and diverse local scheduling policies, global scheduling policies for efficient resource utilization, flow-control mechanisms for predictable system behaviour, and a range of system reconfiguration options to meet application timing requirements. An audio/video example is used in this paper to demonstrate the power and utility of distributed pipeline scheduling.

1. INTRODUCTION

It is not uncommon for serious performance problems to arise in the development of distributed, heterogeneous real-time systems. Part of the problem has been the lack of a systematic approach for designing these systems. Discrete-event simulation and timeline-construction approaches tend to scale poorly, with the complexity of the model approaching the complexity of the system under development. These models become unwieldy, fall out of maintenance, and are eventually discarded. As a result, the performance properties of the final system diverge greatly from the initial goals.

This paper presents a framework, denoted the distributed pipeline scheduling framework, for designing distributed, heterogeneous real-time systems that execute in a pipelined, efficient and predictable manner. Furthermore, the analysis techniques presented in the framework decompose the very complex multi-resource system timing analysis problem into a set of simpler single-resource, single-application-stream problems that accurately predict the performance properties of the resulting system design and can be used to determine whether all real-time application timing requirements are met. If application timing requirements are not met, the framework also delineates how system configuration parameters can be manipulated in an attempt to meet them.

Figure 1 provides an overview of the distributed pipeline scheduling framework. Each logical application stream model (LASM), together with an optional parameters list (OP), captures a real-time application's timing and processing requirements. The LASM is composed of a set of precedence-constrained processing steps. Each processing step is a unit of execution on some resource, e.g. a thread is a processing step executing on a processor-type resource. The target platform model (TPM) represents a connected set of system resources, each of which can be found in the library of single-resource scheduling models.
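To make the LASM and TPM abstractions concrete, the following Python sketch shows one way they might be represented. It is a hypothetical illustration only, not the paper's notation or tooling: the class and field names (ProcessingStep, LogicalApplicationStreamModel, TargetPlatformModel, worst_case_cost, period_ms, end_to_end_deadline_ms) are assumptions chosen for readability.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingStep:
    """A unit of execution on a single resource (e.g. a thread on a processor)."""
    name: str
    resource_type: str            # e.g. "processor", "network-link"
    worst_case_cost: float        # assumed worst-case execution/transfer time (ms)
    predecessors: List[str] = field(default_factory=list)  # precedence constraints

@dataclass
class LogicalApplicationStreamModel:
    """LASM: an application's end-to-end timing and processing requirements."""
    name: str
    period_ms: float              # arrival period of the input stream
    end_to_end_deadline_ms: float
    steps: List[ProcessingStep] = field(default_factory=list)

@dataclass
class Resource:
    """A single schedulable resource in the target platform."""
    name: str
    resource_type: str
    local_scheduling_policy: str  # e.g. "rate-monotonic", "FIFO"

@dataclass
class TargetPlatformModel:
    """TPM: a connected set of heterogeneous system resources."""
    resources: List[Resource] = field(default_factory=list)

# Toy example: a video stream whose steps span two resource types.
video = LogicalApplicationStreamModel(
    name="video", period_ms=33.3, end_to_end_deadline_ms=100.0,
    steps=[
        ProcessingStep("capture", "processor", 5.0),
        ProcessingStep("send", "network-link", 8.0, predecessors=["capture"]),
        ProcessingStep("decode", "processor", 12.0, predecessors=["send"]),
    ],
)

Under this reading, mapping each step of the LASM onto a resource of the TPM is what reduces the end-to-end analysis to a set of per-resource schedulability checks.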