Abstract

Energy has become a first-class design constraint for all types of processors. Data accesses contribute significantly to processor energy usage and can account for up to 25% of the total energy used in embedded processors. A set-associative level-one data cache (L1 DC) organization is particularly energy-inefficient: load operations access all L1 DC tag and data arrays in parallel to reduce access latency, yet the data can reside in at most one way. Techniques that reduce L1 DC energy usage at the expense of degraded performance, such as filter caches, have not been adopted. In this presentation I will describe various techniques we have developed to reduce the energy usage of L1 DC accesses without adversely affecting performance. These techniques include avoiding unnecessary loads from L1 DC data arrays and a practical data filter cache design that not only significantly reduces data access energy usage, but also avoids the execution time penalty traditionally associated with data filter caches.
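The waste the abstract describes can be illustrated with a back-of-the-envelope energy model. The sketch below is not the authors' technique; it is a minimal comparison, under assumed unit energy costs, of a conventional parallel lookup (all tag and data arrays read on every load) against a phased lookup that reads tags first and then fetches data from at most the one matching way. The constants `TAG_READ`, `DATA_READ`, and `WAYS` are hypothetical values chosen only for illustration.

```python
# Hypothetical per-array energy costs (arbitrary units, for illustration only).
TAG_READ = 1.0    # energy to read one tag array
DATA_READ = 4.0   # energy to read one data array (wider, so assumed costlier)
WAYS = 4          # a 4-way set-associative L1 DC

def parallel_access_energy():
    """Conventional load: all tag and data arrays are read in parallel,
    even though the data can reside in at most one way."""
    return WAYS * TAG_READ + WAYS * DATA_READ

def phased_access_energy(hit):
    """Phased load: read all tags first, then read only the matching
    data way (or none on a miss)."""
    return WAYS * TAG_READ + (DATA_READ if hit else 0.0)

if __name__ == "__main__":
    print(parallel_access_energy())      # energy of a conventional load
    print(phased_access_energy(True))    # energy of a phased load that hits
```

Under these assumed costs, a phased hit uses less than half the energy of a conventional load; the trade-off, which the techniques in the talk aim to avoid, is that serializing tag and data access can lengthen the load latency.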
