Abstract

State-of-the-art embedded processors are used in domains such as vision-based and big-data applications. These applications process large volumes of data per task and therefore require frequent main-memory accesses to complete their computation. In such a scenario, a larger last-level cache (LLC) improves system performance and throughput by substantially reducing the global miss rate and miss penalty. However, the extended cache memory increases power consumption, which is especially significant for battery-powered mobile devices. Near-threshold operation of memory cells is a notable technique for saving a substantial amount of energy in such applications. We propose a cache architecture that exploits both near-threshold and standard LLC operation to meet the required power and performance constraints. A controller unit dynamically drives the LLC to operate in the standard or near-threshold region based on application-specific behavior; the controller can also power-gate a portion of the LLC to further reduce leakage power. Simulations of different MiBench benchmarks show that the proposed cache architecture reduces average energy consumption by 22% with a minimal average runtime penalty of 2.5% over a baseline architecture with no cache reconfigurability.
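To illustrate the kind of decision logic such a controller might implement, the following is a minimal sketch, not the paper's actual design: an interval-based policy that picks the LLC operating region from the observed miss rate and power-gates ways when utilization is low. All class names, thresholds, and metrics here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class LLCStats:
    miss_rate: float      # misses / accesses over the last interval (assumed metric)
    utilization: float    # fraction of cache ways touched in the interval (assumed metric)

class LLCModeController:
    """Hypothetical interval-based controller: low memory demand selects
    near-threshold (NT) operation plus way gating; high demand selects
    standard operation with all ways powered."""

    def __init__(self, nt_miss_thresh=0.05, gate_util_thresh=0.5, total_ways=16):
        self.nt_miss_thresh = nt_miss_thresh      # assumed tuning knob
        self.gate_util_thresh = gate_util_thresh  # assumed tuning knob
        self.total_ways = total_ways

    def decide(self, stats: LLCStats):
        # A low miss rate suggests the workload tolerates slower NT access.
        mode = "near_threshold" if stats.miss_rate < self.nt_miss_thresh else "standard"
        # Power-gate unused ways only when utilization is low; keep at
        # least half the ways on to bound the miss-penalty increase.
        if stats.utilization < self.gate_util_thresh:
            active_ways = max(self.total_ways // 2,
                              int(stats.utilization * self.total_ways))
        else:
            active_ways = self.total_ways
        return mode, active_ways
```

For example, a phase with a 1% miss rate and 25% way utilization would be driven to near-threshold operation with half the ways gated off, while a memory-intensive phase would run the full LLC at the standard operating point.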
