Abstract

Over the past decade, real-time embedded systems have become increasingly complex and pervasive. From the user's perspective, these systems have stringent requirements regarding size, performance and energy consumption, and due to business competition, their time-to-market is a crucial factor. Besides these requirements, system designers must handle the increasing dynamism in the resources required by modern applications, such as object-based video coders. In addition, the architectural features recently introduced in hardware platforms to increase average performance widen the gap between the average-case and worst-case execution times of applications. Therefore, much work is being done on design methodologies for embedded systems that deal with this dynamism and cope with these tight requirements. One of the most well-known design methodologies is scenario-based design. It has long been used in user-centered design approaches in different areas, including embedded systems. Scenarios concretely describe, in an early phase of the development process, the use of a future system. Usually, they appear as narrative descriptions of envisioned usage episodes, or as Unified Modeling Language (UML) use-case diagrams that enumerate, from a functional and timing point of view, all possible user actions and the system reactions required to meet a proposed system function. These scenarios are often called use-case scenarios.

In this thesis, we concentrate on a different type of scenario, the so-called application scenarios, which may be derived from the behavior of the embedded system application. While use-case scenarios classify an application's behavior based on the different ways the system can be used, application scenarios classify application behavior based on cost aspects, such as quality or resource usage. Application scenarios are used to reduce the system cost by exploiting information about what can happen at runtime to make better design decisions.

We have developed a general methodology that can be integrated into existing embedded system design methodologies. It consists of five design-time/runtime steps: (i) identification, which classifies an application into scenarios; (ii) prediction, which generates a runtime mechanism used to determine the scenario in which the application is running; (iii) exploitation, which enables more specific and aggressive design decisions to be made for each scenario; (iv) switching, which specifies when and how the application switches from one scenario to another; and (v) calibration, which extends and modifies the scenarios and their related mechanisms, based on information collected at runtime, to further improve system cost and quality.

To prove the effectiveness of our methodology, we developed several automatic trajectories that exploit application scenarios for low-energy, single-processor embedded system design, under both soft and hard real-time constraints. They automatically classify the runtime behavior of the application into several application scenarios, where the cost (in terms of required processor cycles) within each scenario remains fairly similar. Moreover, a runtime predictor is automatically derived and introduced into the application; at runtime it is used to select and switch between scenarios, so that the optimizations specific to each scenario can be enabled. All of these trajectories are applicable to streaming applications whose dynamism is mostly present in the control variables; a small sketch of the prediction idea follows below.
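As a concrete illustration of the prediction step, the C sketch below shows how a runtime predictor might map the control variables of an incoming frame onto an application scenario before the frame is processed. The decoder, the variable names and the thresholds are hypothetical assumptions for illustration, not the predictors derived by the trajectories themselves.

    #include <stdio.h>

    /* Hypothetical scenario identifiers for a streaming decoder whose
     * per-frame cycle cost is driven mainly by control variables read
     * from the frame header (names and thresholds are illustrative). */
    typedef enum { SCENARIO_LOW, SCENARIO_MID, SCENARIO_HIGH } scenario_t;

    /* Runtime predictor: maps the control variables of the next frame
     * onto the scenario whose cycle budget will be used downstream. */
    static scenario_t predict_scenario(int frame_type, int num_objects)
    {
        if (frame_type == 0 && num_objects <= 2)
            return SCENARIO_LOW;   /* cheap frames: small cycle budget      */
        if (num_objects <= 8)
            return SCENARIO_MID;   /* typical frames: medium cycle budget   */
        return SCENARIO_HIGH;      /* worst-case-like frames: large budget  */
    }

    int main(void)
    {
        /* Example: a frame with many objects is classified as HIGH. */
        scenario_t s = predict_scenario(1, 12);
        printf("predicted scenario: %d\n", (int)s);
        return 0;
    }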
These applications are written in C, as C is the most widely used language for embedded systems software. The trajectories detect and exploit scenarios to improve cycle-budget estimation for applications, reducing the over-estimation of the number and size of computation resources compared to existing design methods. Moreover, by integrating an automatically derived predictor into the application and using it in the context of a proactive dynamic voltage scaling (DVS) aware scheduler, energy consumption is reduced with little or no sacrifice in the resulting system quality. This can be achieved by being conservative, as required for hard real-time systems, or by using a runtime calibration mechanism, which works well for soft real-time systems. Even though the scenario information and the mechanisms introduced into the application add runtime overhead, our methods keep this overhead limited and under control, and generate a final implementation of the application with substantial average energy savings.
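To make the DVS exploitation more tangible, the following C sketch shows one conservative way a proactive DVS-aware scheduler could use the cycle budget of the predicted scenario: it selects the lowest processor frequency that still meets the frame deadline. The operating points and numbers are illustrative assumptions, not values from the thesis.

    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative operating points of a DVS-capable processor
     * (frequencies in MHz); the values are assumptions, not measurements. */
    static const unsigned freq_mhz[] = { 200, 400, 600, 800 };
    #define NUM_FREQS (sizeof(freq_mhz) / sizeof(freq_mhz[0]))

    /* Proactive DVS decision: given the cycle budget predicted for the
     * current scenario and the frame deadline, select the lowest frequency
     * that still finishes the frame in time (conservative, so suitable for
     * hard real-time use when the budget is an upper bound). */
    static unsigned select_frequency(unsigned long cycle_budget,
                                     unsigned deadline_us)
    {
        for (size_t i = 0; i < NUM_FREQS; i++) {
            /* cycles executable within the deadline at this frequency */
            unsigned long capacity = (unsigned long)freq_mhz[i] * deadline_us;
            if (capacity >= cycle_budget)
                return freq_mhz[i];
        }
        return freq_mhz[NUM_FREQS - 1]; /* fall back to the highest setting */
    }

    int main(void)
    {
        /* Example: a 6,000,000-cycle scenario with a 33 ms deadline
         * fits at the lowest frequency (200 MHz * 33000 us = 6.6M cycles). */
        printf("chosen frequency: %u MHz\n",
               select_frequency(6000000UL, 33000));
        return 0;
    }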
