Abstract
The problem of precisely computing the worst-case blocking time that tasks may experience is one of the fundamental issues in the schedulability analysis of real-time applications. While exact methods have been proposed for more sophisticated protocols, the problem is indeed complex in the case of the Priority Inheritance Protocol, even when attention is restricted to uniprocessor systems, non-nested resource accesses, and non-self-suspending tasks. Besides a very simple method that in general yields loose upper bounds, only one algorithm, of exponential complexity, has so far been reported in the literature to tighten such bounds. In this article, we describe a novel approach which, by leveraging an operational research technique to model the problem, computes the same tight bounds in polynomial time. We then discuss the scenarios in which, assuming no conditional statements in the tasks’ code, the computed bounds derive from a blocking chain that cannot actually occur, and we refine the initial model so as to compute more precise worst-case blocking times for any task set in any possible operating condition.
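For context, the simple method alluded to above is presumably the classical per-task bound for the Priority Inheritance Protocol, which exploits the fact that a task can be blocked at most once by each lower-priority task. A sketch of a bound of this flavor, using illustrative notation that is not necessarily the article's own, is

$$ B_i \;\le\; \sum_{\tau_j \in \mathrm{lp}(i)} \; \max_{S_k \in \mathrm{block}(i,j)} \delta_{j,k} $$

where $\mathrm{lp}(i)$ denotes the set of tasks with priority lower than $\tau_i$, $\mathrm{block}(i,j)$ the set of semaphores through which $\tau_j$ can block $\tau_i$, and $\delta_{j,k}$ the length of the longest critical section of $\tau_j$ on semaphore $S_k$. Summing each lower-priority task's single longest blocking critical section is safe but in general pessimistic, since the chains of blockings that realize all these maxima simultaneously may be infeasible; tightening exactly this pessimism is the subject of the article.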