Abstract

In multicore systems with a shared cache, multiple tasks run simultaneously on multiple cores and compete for the shared cache. Cache interference occurs when a task running on one core evicts cache data belonging to tasks running on other cores. As today's multicore systems run tasks with different priorities, providing QoS guarantees on cache usage is gaining importance. Prior solutions for reducing cache interference and providing cache QoS have mainly used cache partitioning to split the cache among cores, but these solutions were implemented and validated only on simulators, not on real systems. This paper discusses new techniques, applied on real systems, to (1) experimentally measure the amount of interference caused by multiple coscheduled programs, (2) reduce the interference miss rate of some programs at the expense of others, and (3) provide cache QoS guarantees to programs, ensuring their miss rates remain below a ceiling.
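The first technique above, measuring interference experimentally, typically compares a program's cache miss behavior when it runs alone against when it is coscheduled with competitors. A minimal sketch of that comparison, assuming miss and access counts are already available (e.g., read from hardware performance counters); the function name and all numeric inputs below are hypothetical illustrations, not from the paper:

```python
# Illustrative sketch: estimating the interference miss rate of a program
# by comparing its miss behavior solo vs. coscheduled with other programs.
# The counter values used in the example call are hypothetical.

def interference_miss_rate(solo_misses, solo_accesses,
                           cosched_misses, cosched_accesses):
    """Extra misses per cache access attributable to coscheduled programs."""
    solo_rate = solo_misses / solo_accesses
    cosched_rate = cosched_misses / cosched_accesses
    return cosched_rate - solo_rate

# Hypothetical counter readings for one program:
rate = interference_miss_rate(solo_misses=2_000_000,
                              solo_accesses=100_000_000,
                              cosched_misses=9_000_000,
                              cosched_accesses=100_000_000)
print(f"interference miss rate: {rate:.2%}")
```

On a real system the four counts would come from per-core performance counters sampled over identical workloads in the two runs, so that the difference isolates misses caused by competing programs rather than by the program itself.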
