Abstract

This paper introduces analyses of write-back caches integrated into response-time analysis for fixed-priority preemptive and non-preemptive scheduling. For each scheduling paradigm, we derive four different approaches to computing the additional costs incurred due to write backs. We show the dominance relationships between these different approaches and note how they can be combined to form a single state-of-the-art approach in each case. The evaluation explores the relative performance of the different methods using a set of benchmarks, as well as making comparisons with no cache and a write-through cache. We also explore the effect of write buffers used to hide the latency of write-through caches. We show that depending upon the depth of the buffer used and the policies employed, such buffers can result in domino effects. Our evaluation shows that even ignoring domino effects, a substantial write buffer is needed to match the guaranteed performance of write-back caches.

Highlights

  • During the last two decades, applications in aerospace and automotive electronics have progressed from deploying embedded microprocessors clocked in the 10’s of MHz range to higher performance devices operating in the 100’s of MHz to GHz range

  • We showed how to account for the costs of using a write-back cache in response-time analysis for fixed-priority preemptive and fixed-priority non-preemptive scheduling

  • We introduced the concepts of Dirty Cache Blocks (DCBs), and Final Dirty Cache Blocks (FDCBs) and classified the different types of write back which can occur due to a task’s internal behaviour, carry-in effects from previously executing tasks, and preemption effects


Introduction

During the last two decades, applications in aerospace and automotive electronics have progressed from deploying embedded microprocessors clocked in the 10’s of MHz range to higher performance devices operating in the 100’s of MHz to GHz range. We are interested in the behaviour of single-level data and unified caches. The behaviour of these caches is crucially dependent on the write policy used. In caches using the write-back policy, writes are not immediately propagated to memory; only upon eviction of a dirty cache line are its contents written back to main memory. This has the potential to greatly reduce the overall number of writes to main memory compared to a write-through policy, as multiple writes to the same location and multiple writes to different locations in the same cache line can be consolidated.
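To illustrate the consolidation effect described above, the following is a minimal sketch (not taken from the paper) of a direct-mapped write-back cache model that counts main-memory writes; all names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch: a direct-mapped write-back cache that counts
# writes to main memory. A dirty line is written back only on
# eviction (or on a final flush), so repeated writes to the same
# cache line are consolidated into a single memory write.

class WriteBackCache:
    def __init__(self, num_lines, line_size):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines    # tag stored in each line
        self.dirty = [False] * num_lines  # dirty bit per line
        self.mem_writes = 0               # write backs to main memory

    def write(self, addr):
        line = (addr // self.line_size) % self.num_lines
        tag = addr // (self.line_size * self.num_lines)
        if self.tags[line] != tag:        # miss: evict resident line
            if self.dirty[line]:          # dirty eviction -> write back
                self.mem_writes += 1
            self.tags[line] = tag
        self.dirty[line] = True           # line now holds unwritten data

    def flush(self):
        # Write back any remaining dirty lines (e.g. at task completion).
        for i in range(self.num_lines):
            if self.dirty[i]:
                self.mem_writes += 1
                self.dirty[i] = False

# Ten writes to addresses within one 32-byte line: a write-through
# cache would issue ten memory writes, the write-back model just one.
cache = WriteBackCache(num_lines=4, line_size=32)
for offset in range(10):
    cache.write(0x100 + offset)
cache.flush()
print(cache.mem_writes)  # 1 consolidated write back
```

Under a write-through policy the same access sequence would generate one memory write per store, which is precisely the overhead that write buffers, discussed in the abstract, attempt to hide.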

