Abstract

Worst-case execution time (WCET) analysis of systems with data caches is one of the key challenges in real-time systems. Caches exploit the inherent reuse properties of programs by temporarily storing certain memory contents near the processor, so that further accesses to those contents do not require costly memory transfers. Current worst-case data cache analysis methods focus on specific cache organizations (set-associative LRU, locked, ACDC, etc.), most of the time adapting techniques designed to analyze instruction caches. On the other hand, there are methodologies to analyze the data reuse of a program independently of the data cache. In this paper we propose a generic WCET analysis framework that analyzes data caches by taking advantage of such reuse information. It includes the categorization of data references and their integration in an IPET model. We apply it to a conventional LRU cache, an ACDC, and other baseline systems, and compare them using the TACLeBench benchmark suite. Our results show that persistence-based LRU analyses discard essential information about data reuse, and a reuse-based analysis improves the WCET bound by around 17% on average. In general, the best WCET estimates are obtained with optimization level 2, where the ACDC cache performs 39% better than a set-associative LRU.

Highlights

  • Real-time systems are increasingly present in industry and daily life

  • In this paper we propose a generic framework for analyzing the worst-case execution time (WCET) of binary programs in a system with data cache

  • For the LRU data cache we study both a persistence-based analysis and a reuse-based analysis, and for the ACDC we propose a heuristic method to obtain a good configuration of its data replacement permissions


Summary

Introduction

Real-time systems are increasingly present in industry and daily life. We can find examples in many sectors, including avionics, robotics, automotive, manufacturing, and air-traffic control. A memory hierarchy made up of one or more cache levels exploits program reuse and saves execution time and energy by delivering data and instructions with an average latency of a few processor cycles instead of requiring costly memory transfers. Although cache designs are ubiquitous in contemporary processors, many of their details are still ignored in WCET analysis, and even single-level LRU (Least Recently Used) instruction caches remain an open issue [1]. The situation is worse for data caches, since writing policies must also be modeled. The interaction between the code and the data cache is much more complex than with the instruction cache, as can be seen in common scenarios such as loops, function calls, and execution-time address computation. Memory instructions accessing local variables use stack frames, whose base address depends, among other things, on the nesting level. If a reference cannot be described as an affine function of the loop indices, h⃗ · i⃗ + c, it is non-linear.

