Abstract
In general computing systems, a job (process/task) may suspend itself whilst it is waiting for some activity to complete, e.g., an accelerator to return data. In real-time systems, such self-suspension can cause substantial performance/schedulability degradation. This observation, first made in 1988, has led to the investigation of the impact of self-suspension on timing predictability, and many relevant results have been published since. Unfortunately, as has recently come to light, a number of the existing results are flawed. To provide a correct platform on which future research can be built, this paper reviews the state of the art in the design and analysis of scheduling algorithms and schedulability tests for self-suspending tasks in real-time systems. We provide (1) a systematic description of how self-suspending tasks can be handled in both soft and hard real-time systems; (2) an explanation of the existing misconceptions and their potential remedies; (3) an assessment of the influence of such flawed analyses on partitioned multiprocessor fixed-priority scheduling when tasks synchronize access to shared resources; and (4) a discussion of the computational complexity of analyses for different self-suspension task models.
Highlights
Real-Time Systems (2019) 55:144–207
Complex cyber-physical systems have timeliness requirements such that deadlines associated with individual computations must be met
It is unfortunate that many of the early results in this area were flawed; we hope that this review will serve as a solid foundation for future research on self-suspensions in real-time systems
Example 3: hardware acceleration using co-processors and computation offloading. In many embedded systems, selected portions of programs are preferably executed on dedicated hardware co-processors to satisfy performance requirements. Such co-processors include, for instance, application-specific integrated circuits (ASICs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs), graphics processing units (GPUs), etc.
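The offloading pattern in Example 3 can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: the `accelerator` function is a hypothetical stand-in for a co-processor, and the blocking call to `Future.result()` models the task self-suspending while it waits for the offloaded computation to return.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def accelerator(job):
    # Hypothetical stand-in for a co-processor (e.g., GPU/FPGA)
    # performing the heavy computation off the CPU.
    time.sleep(0.01)  # models the accelerator's latency
    return job * job

def task(job, pool):
    # Offload the heavy part to the co-processor, then self-suspend:
    # result() blocks this task until the accelerator returns its data.
    future = pool.submit(accelerator, job)
    return future.result()

with ThreadPoolExecutor(max_workers=1) as pool:
    print(task(6, pool))  # → 36
```

During the blocking `result()` call the task occupies no processor time, which is precisely the interval that classical schedulability analyses (which assume no voluntary suspension) do not account for.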
Summary
Complex cyber-physical systems (i.e., advanced embedded real-time computing systems) have timeliness requirements such that deadlines associated with individual computations must be met. The seminal work by Liu and Layland (1973) considers the scheduling of periodically triggered computations, which are usually termed tasks. The analysis they presented enables the schedulability of a set of such tasks to be established, i.e., whether their deadlines will be met at run-time.

One underlying assumption of the majority of these schedulability analyses is that a task does not voluntarily suspend its execution: once executing, a task ceases to execute only as a result of either a preemption by a higher-priority task, becoming blocked on a shared resource that is held by a lower-priority task on the same processor, or completing its execution (for the current activation of the task). This is a strong assumption that lies at the root of Liu and Layland's seminal analysis (Liu and Layland 1973), as it implies that the processor is contributing some useful work (i.e., the system progresses) whenever there exist incomplete jobs in the system (i.e., if some computations have been triggered, but not yet completed). The remainder of this section provides more background and motivation for general self-suspension.
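As a concrete illustration of the kind of analysis this assumption underpins, Liu and Layland's classic utilization bound for rate-monotonic scheduling gives a sufficient schedulability test for periodic tasks without self-suspension. The sketch below is not from this survey; tasks are given as hypothetical (execution time, period) pairs.

```python
def ll_bound(n):
    # Liu & Layland (1973) utilization bound for n tasks
    # under rate-monotonic scheduling: n * (2^(1/n) - 1).
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks):
    # tasks: list of (C, T) pairs (execution time, period).
    # Sufficient (not necessary) test: total utilization <= LL bound.
    # Valid only when tasks never self-suspend.
    utilization = sum(c / t for c, t in tasks)
    return utilization <= ll_bound(len(tasks))

print(rm_schedulable([(1, 4), (2, 8)]))  # utilization 0.5 <= 0.828..., prints True
```

If a task self-suspends, the processor may idle while incomplete jobs exist, so this test (and many analyses built on the same progress assumption) no longer applies directly, which is exactly the gap this survey addresses.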