Abstract

In a chip multiprocessor with a shared cache, competing accesses from different applications degrade system performance and lead to unpredictable execution times. Cache partitioning techniques can exclusively partition the shared cache among multiple competing applications. In this paper, the authors design the framework of Process Priority-based Multithread Cache Partitioning (PP-MCP), a dynamic shared cache partitioning mechanism that improves the performance of multi-threaded multi-programmed workloads. The framework includes a miss rate monitor, called the Application-oriented Miss Rate Monitor (AMRM), which dynamically collects miss rate information for multiple multi-threaded applications across different cache partitions, and a process priority-based weighted cache partitioning algorithm, which extends traditional miss-rate-oriented cache partitioning algorithms. The algorithm allocates cache in order of process priority, ensuring that the highest-priority process gets sufficient cache space, and applications with more threads tend to receive more of the shared cache so as to improve overall system performance. Experiments show that PP-MCP achieves better IPC throughput and weighted speedup. Specifically, for multi-threaded multi-programmed scientific computing workloads, PP-MCP-1 improves throughput by up to 20% and by 10% on average over PP-MCP-0.
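To make the allocation policy concrete, below is a minimal C sketch of a priority- and thread-count-weighted way allocation in the spirit described above. It is an illustrative assumption, not the paper's actual PP-MCP implementation: the structure names, the synthetic miss curve, and the greedy weighted-marginal-gain loop are hypothetical stand-ins for the AMRM-supplied miss rate information and the real partitioning algorithm.

```c
/*
 * Hypothetical sketch of priority- and thread-count-weighted cache way
 * allocation, loosely following the PP-MCP description in the abstract.
 * The names (app_t, NUM_WAYS, misses_with) and the miss numbers are
 * illustrative assumptions, not the paper's actual interface or data.
 */
#include <stdio.h>

#define NUM_APPS 3
#define NUM_WAYS 16

typedef struct {
    const char *name;
    int priority;      /* higher value = higher process priority */
    int threads;       /* number of threads in the application   */
    long base_misses;  /* made-up baseline miss count (one way)  */
    int ways;          /* cache ways allocated so far            */
} app_t;

/* Synthetic stand-in for AMRM measurements: misses shrink as more
   ways are granted, with diminishing returns. Purely illustrative. */
static long misses_with(const app_t *a, int ways)
{
    return a->base_misses / (ways + 1);
}

/* Weighted marginal utility of granting one more way: the miss
   reduction scaled by process priority and thread count, so that
   high-priority and many-threaded applications are favored. */
static double marginal_gain(const app_t *a)
{
    if (a->ways >= NUM_WAYS)
        return 0.0;
    long saved = misses_with(a, a->ways) - misses_with(a, a->ways + 1);
    return (double)saved * a->priority * a->threads;
}

int main(void)
{
    app_t apps[NUM_APPS] = {
        { "app_hi_prio",  3, 4, 1000000, 0 },
        { "app_mid_prio", 2, 8,  800000, 0 },
        { "app_lo_prio",  1, 2, 1200000, 0 },
    };

    /* Give every application one way first so none is starved. */
    for (int i = 0; i < NUM_APPS; i++)
        apps[i].ways = 1;

    /* Greedily hand each remaining way to the application with the
       largest weighted marginal gain. */
    for (int w = NUM_APPS; w < NUM_WAYS; w++) {
        int best = 0;
        for (int i = 1; i < NUM_APPS; i++)
            if (marginal_gain(&apps[i]) > marginal_gain(&apps[best]))
                best = i;
        apps[best].ways++;
    }

    for (int i = 0; i < NUM_APPS; i++)
        printf("%-12s priority=%d threads=%d ways=%d\n",
               apps[i].name, apps[i].priority, apps[i].threads,
               apps[i].ways);
    return 0;
}
```

Under these assumptions, the higher-priority and more heavily threaded applications end up with larger partitions, matching the intent stated in the abstract.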

