Abstract

The cache hierarchy of current multicores typically consists of three levels, ranging from the smaller and faster L1 to the larger and slower L3. This approach has proven effective in high-performance processors, since it reduces the average memory access time. However, in devices where energy efficiency is critical, such as low-power or embedded processors, conventional cache hierarchies raise several concerns that waste area and energy: multiple cache lookups, block replication, block migration, and overprovisioning of private cache space. To address these issues, this work proposes FOS-Mt, a new cache organization aimed at saving energy in current multicores running multithreaded applications. FOS-Mt’s cache hierarchy consists of only two levels: the L1 cache, located in the core pipeline, and a single, flattened second level that forms an aggregated cache space accessible to all execution cores. This level is sliced into multiple small buffers, which are dynamically assigned to any of the running threads when they are expected to improve system performance. Buffers not allocated to any core are powered off to save energy. Experimental results show that FOS-Mt significantly reduces both static and dynamic energy consumption compared with conventional cache organizations, such as NUCA or shared caches, with the same storage capacity. Compared with the widely known cache decay approach, FOS-Mt improves the energy-delay product by 19.3% on average. Moreover, although FOS-Mt is an energy-aware architecture, performance is scarcely affected: it remains similar to that achieved by the conventional and cache decay approaches.
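The dynamic buffer assignment described above can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: the class, method names, and buffer count are illustrative only, and the sketch models just the bookkeeping (assigning powered-off buffers to threads and powering them off again on release), not the cache itself or the performance predictor that decides when allocation is worthwhile.

```python
class BufferPool:
    """Hypothetical model of FOS-Mt's sliced second level: a pool of small
    buffers, each either powered off or assigned to a running thread."""

    def __init__(self, num_buffers):
        # None means the buffer is powered off; otherwise it holds a thread id.
        self.owner = [None] * num_buffers

    def allocate(self, thread_id):
        """Assign a powered-off buffer to a thread; return its index, or
        None if every buffer is already in use."""
        for i, owner in enumerate(self.owner):
            if owner is None:
                self.owner[i] = thread_id  # power the buffer on for this thread
                return i
        return None

    def release(self, buffer_id):
        """Return a buffer to the pool, i.e. power it off to save static energy."""
        self.owner[buffer_id] = None

    def powered_off(self):
        """Number of buffers currently powered off."""
        return sum(1 for o in self.owner if o is None)


pool = BufferPool(num_buffers=8)
b0 = pool.allocate(thread_id=0)
b1 = pool.allocate(thread_id=1)
print(pool.powered_off())  # 6: only two buffers are powered on
pool.release(b0)
print(pool.powered_off())  # 7: the released buffer is powered off again
```

In the paper's terms, the static-energy savings come from the `powered_off` buffers, while the allocation policy (abstracted away here) grants a buffer only when it is expected to improve performance.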
