Abstract
The filter cache has been proposed as an energy-saving architectural feature. A filter cache is placed between the CPU and the instruction cache (I-cache) to provide the instruction stream. Energy savings result from accesses to a small cache. There is, however, a loss of performance when instructions are not found in the filter cache. The majority of the filter cache's energy savings come from the temporal reuse of instructions in small loops. We examine successive fetch addresses to dynamically predict whether the next fetch address will hit in the filter cache. When a miss is predicted, we reduce the miss penalty by accessing the I-cache directly. Experimental results show that our next-fetch prediction reduces the performance penalty by more than 91% and is more energy efficient than a conventional filter cache. Our filter cache design achieves average I-cache energy savings of 31% with around 1% performance degradation.
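The abstract only outlines the mechanism, so the following is a minimal sketch, not the authors' implementation. It assumes a direct-mapped filter cache whose tag array is probed one fetch ahead with the next fetch address: a predicted miss routes that fetch straight to the I-cache, avoiding the filter-cache miss penalty. The cache geometry and fill-on-bypass policy below are illustrative assumptions, not taken from the paper.

```python
# Sketch of a filter cache with next-fetch prediction (assumptions noted above).
LINE_SIZE = 16   # bytes per line (assumed)
NUM_LINES = 16   # e.g. a 256-byte filter cache (assumed)

class FilterCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES          # None marks an invalid line

    def _split(self, addr):
        line = addr // LINE_SIZE
        return line % NUM_LINES, line // NUM_LINES

    def probe(self, addr):
        """Tag check only -- used to predict the next fetch's outcome."""
        idx, tag = self._split(addr)
        return self.tags[idx] == tag

    def fill(self, addr):
        """Bring the line containing addr in from the I-cache."""
        idx, tag = self._split(addr)
        self.tags[idx] = tag

def run(fetch_addrs):
    fc = FilterCache()
    stats = {"fc_hits": 0, "bypasses": 0, "fc_misses": 0}
    predict_hit = True                          # first fetch: try the filter cache
    for i, addr in enumerate(fetch_addrs):
        if not predict_hit:
            stats["bypasses"] += 1              # predicted miss: access I-cache directly
            fc.fill(addr)                       # assumed policy: still install the line
        elif fc.probe(addr):
            stats["fc_hits"] += 1               # low-energy filter-cache access
        else:
            stats["fc_misses"] += 1             # misprediction: pay the miss penalty
            fc.fill(addr)
        # Next-fetch prediction: probe the tag array with the upcoming address.
        if i + 1 < len(fetch_addrs):
            predict_hit = fc.probe(fetch_addrs[i + 1])
    return stats

# Example: a small loop (temporal reuse, mostly filter-cache hits)
# followed by straight-line code (mostly predicted misses, bypassed).
loop = list(range(0, 64, 4)) * 8
straight = list(range(1024, 1536, 4))
print(run(loop + straight))
```

On this toy fetch stream, the loop iterations hit in the filter cache after the first pass, while the straight-line run is largely bypassed to the I-cache, illustrating how prediction recovers the miss penalty a conventional filter cache would pay.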