Abstract

Effective allocation of limited shared resources is a key problem for chip multiprocessors. As the number of processor cores grows, competition among threads for limited shared resources becomes more intense, and its effect on system performance becomes more significant. To alleviate this problem, a fair and effective scheduling algorithm for allocating shared resources among threads is important. Among the various shared resources, the shared cache and the DRAM system have the largest effect on system performance. There are essential differences between the last-level cache and a first-level cache. The goal of a first-level cache is to supply data to the processor quickly, so it requires high access speed. The goal of the last-level cache, by contrast, is to keep as much data on chip as possible; its access-speed requirements are less strict, and it is constrained mainly by the number of transistors available on the die. The LRU policy and its approximations, which traditionally manage first-level caches well, are therefore not suitable for a large-capacity last-level cache. They can cause destructive interference between threads, and streaming programs can induce cache thrashing, both of which degrade processor performance. This paper analyzes several open problems in managing a large shared last-level cache on multi-core platforms and proposes corresponding low-cost solutions.
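The thrashing effect described above is easy to reproduce in a toy simulation. The following Python sketch is purely illustrative and not from the paper: it models a single 8-way set of an LRU-managed cache shared by a thread with a small reused working set (thread A) and a streaming thread with no reuse (thread B). All names and parameters (`LRUCacheSet`, the way count, the access pattern) are hypothetical.

```python
from collections import OrderedDict

class LRUCacheSet:
    """One set of an LRU-managed cache with a fixed number of ways."""
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()  # tag -> None, ordered oldest (LRU) to newest (MRU)

    def access(self, tag):
        """Return True on a hit; on a miss, evict the LRU line if the set is full."""
        if tag in self.lines:
            self.lines.move_to_end(tag)  # promote to MRU on reuse
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[tag] = None
        return False

# Thread A reuses 4 hot lines; thread B streams through new lines with no reuse.
cache = LRUCacheSet(ways=8)
hot_set = [f"A{i}" for i in range(4)]       # fits easily in 8 ways by itself
stream  = (f"B{i}" for i in range(20_000))  # never re-accessed

hits = 0
for step in range(10_000):
    hits += cache.access(hot_set[step % 4])  # thread A's reused line
    cache.access(next(stream))               # thread B inserts two fresh lines
    cache.access(next(stream))               # per step, pushing A's lines out
print(f"thread A hit rate: {hits / 10_000:.2%}")
```

Between two accesses to the same hot line, eleven more-recent tags are placed above it in an 8-way set, so every hot line is evicted before reuse and thread A's hit rate collapses to roughly 0% (running alone it would be near 100%). Thrash-resistant insertion policies, which place never-reused lines near the LRU position instead of the MRU position, are one known way to avoid this; the paper's own solutions are described in the full text.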
