To successfully engineer a large-scale real-time system, we need a disciplined approach to both the logical complexity and the timing complexity inherent in these systems. The logical complexity is managed by the software engineering methodology embodied in Ada, while the timing complexity is addressed by formal real-time scheduling algorithms [3, 4, 5, 6]. From a software engineering point of view, formal scheduling algorithms translate complex timing constraints into simple resource utilization constraints. As long as the utilization constraints on the CPU, I/O channels, and communication media are observed, both the deadlines of periodic tasks and the response time requirements of aperiodic tasks will be met. There is considerable freedom to modify software provided that resource utilization remains within the specified bounds. Furthermore, should there be a transient overload, the number of tasks missing their deadlines is a function of the magnitude of the overload, and the order in which tasks miss deadlines is pre-defined [7]. From an implementation point of view, these algorithms belong to the class of static priority algorithms, which can be implemented efficiently. Task priorities are functions of the timing constraints, computation requirements, and relative importance of tasks; they need not be computed at run time unless task parameters are modified. A principal obstacle to the implementation of these formal real-time scheduling algorithms is priority inversion, which was recognized at the previous Ada Real-Time Workshop as a serious problem that must be corrected [1]. Priority inversion is any situation in which a lower priority task holds a resource while a higher priority task is ready to use it. Four changes are needed to bound the waiting time of a higher priority task as a function of the rendezvous durations of shared lower priority servers, rather than of the execution times of lower priority tasks.
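The utilization constraints referred to here can be made concrete with the classical least-upper-bound test for the rate monotonic algorithm, U ≤ n(2^(1/n) − 1), due to Liu and Layland. The following sketch (in Python rather than Ada, with illustrative task parameters not drawn from this paper) checks a periodic task set against that bound:

```python
# Schedulability check against the classical rate monotonic utilization
# bound U(n) = n * (2**(1/n) - 1). The task parameters below are
# illustrative only, not taken from the experiments in this paper.

def rm_utilization_bound(n: int) -> float:
    """Least upper bound on total CPU utilization for n periodic tasks."""
    return n * (2 ** (1.0 / n) - 1)

def is_schedulable(tasks) -> bool:
    """tasks: list of (computation_time, period) pairs.

    Returns True if total utilization is within the rate monotonic
    bound; the test is sufficient but not necessary.
    """
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

# Three hypothetical periodic tasks as (C, T) pairs:
tasks = [(10, 50), (15, 100), (20, 200)]
print(is_schedulable(tasks))  # utilization 0.45 <= bound 0.7798 -> True
```

Note that the test is sufficient, not necessary: a task set exceeding the bound may still meet all deadlines, but staying within it guarantees schedulability under the stated assumptions.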
1. The queue for a task accept must be managed not as a FIFO queue but as a priority queue, ordered according to the priorities of the tasks making the entry calls, so that scheduling of the resulting rendezvous does not itself cause priority inversion.
2. An unguarded select clause governing a set of task accepts must select first the accept whose queue currently contains the highest priority calling task.
3. A task must always run at the greater of its own priority and the priority of the highest priority task on any of its entry queues. This priority inheritance must be transitive when a chained sequence of task entry calls is made.
4. Task priorities must be modifiable at run time in order to respond to changes in the application's response time requirements (e.g., after a system mode change, a sensor must increase its sampling rate).

Of these changes, 1, 2, and 3 were implied by the Steelman requirement that the CPU resource be scheduled “first-in-first-out within priorities”. Change 4 was specifically required by the Steelman scheduling requirement but omitted from the Ada language definition. A theoretical investigation [8] showed that if these changes are made, a higher priority task can be blocked by lower priority tasks at most min(m, n) times on a processor, where m is the number of servers it shares with lower priority tasks and n is the number of lower priority tasks. The same study showed that in a uniprocessor environment, if application tasks are properly structured according to certain rules, then the priority ceiling protocol, which adds one further change to the above list, guarantees freedom from deadlock and bounds blocking by lower priority tasks to at most once. While these theoretical findings are interesting in their own right, it is imperative that we examine their actual behavior experimentally, because the assumptions made by the theory can only be approximated in practice.
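The transitive priority inheritance required by change 3 can be sketched as follows (in Python rather than Ada; the `Task` class and its fields are hypothetical illustrations, not part of any Ada runtime). A server's active priority is the maximum of its own base priority and the active priorities of all callers queued on its entries; because a caller may itself be a server that is inheriting priority, the computation recurses along a chain of entry calls:

```python
# Sketch of transitive priority inheritance (change 3). The Task class
# and its fields are hypothetical, for illustration only; a real Ada
# runtime would maintain this state inside its tasking kernel.

class Task:
    def __init__(self, name: str, base_priority: int):
        self.name = name
        self.base_priority = base_priority
        self.entry_queue = []  # tasks whose entry calls are queued on us

    def active_priority(self) -> int:
        """Own priority, raised by the highest priority caller on any
        entry queue. Callers may themselves be inheriting priority
        from their own callers, so the computation is recursive,
        which makes the inheritance transitive along call chains."""
        inherited = [caller.active_priority() for caller in self.entry_queue]
        return max([self.base_priority] + inherited)

# Chained entry calls: a high-priority client calls server_b, which,
# while in rendezvous, calls server_a.
client   = Task("client",   base_priority=10)
server_b = Task("server_b", base_priority=3)
server_a = Task("server_a", base_priority=1)

server_b.entry_queue.append(client)    # client's call queued on server_b
server_a.entry_queue.append(server_b)  # server_b's call queued on server_a

print(server_a.active_priority())  # 10: server_a transitively inherits
```

Without the recursion, server_a would run at priority 3 (inheriting only from server_b directly), and a medium-priority task could preempt it while the priority-10 client waits — exactly the unbounded priority inversion the four changes are meant to prevent.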
To investigate the effects of the intrinsic Ada priority inversion and its control, we conducted a set of experiments with an existing Ada run-time environment that was modified in accordance with changes 1, 2, and 3 above. Results of this experimentation using the rate monotonic algorithm are presented in the next section. We believe this subject is of critical importance to all users of Ada for embedded real-time systems, especially as existing Ada development tools mature and their inefficiencies, particularly in task and rendezvous management, are removed.