Abstract

Fabricating circuits that employ ever-smaller transistors leads to dramatic variations in critical process parameters. This in turn results in large variations in the execution/access latencies of different hardware components. The situation is even more severe for memory components because of the minimum-sized transistors used in their design. Current design methodologies that are tuned for worst-case scenarios are becoming increasingly pessimistic from a performance standpoint and thus may not be a viable option at all for future designs. This paper makes two contributions targeting on-chip data caches. First, it presents an adaptive cache management policy based on non-uniform cache access. Second, it proposes a latency compensation approach that employs several circuit-level techniques to change the access latency of select cache lines based on the criticality of the load instructions that access them. Our experiments reveal that both techniques can recover a significant amount of the performance lost to worst-case designs.
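To make the second contribution concrete, the following is a minimal conceptual sketch, not the paper's implementation, of how a criticality-driven latency compensation scheme might be organized: a per-load saturating counter tracks how often a load stalls the pipeline, and sufficiently critical loads are steered to a faster (lower-margin) access mode while the rest use the conservative worst-case timing. All names, thresholds, and data structures below are hypothetical illustrations.

```c
/*
 * Conceptual sketch (not the paper's mechanism): criticality-driven
 * selection of a cache-line access mode.  The access modes stand in for
 * circuit-level knobs (e.g. reduced timing margin) that trade robustness
 * for latency; the predictor and thresholds are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical per-line access modes exposed by circuit-level techniques. */
typedef enum {
    ACCESS_WORST_CASE,   /* conservative timing margin, slower access */
    ACCESS_BOOSTED       /* reduced margin, faster access for critical loads */
} access_mode_t;

/* Hypothetical criticality-predictor state kept per load instruction. */
typedef struct {
    unsigned saturating_counter;  /* grows when the load stalls the pipeline */
} load_criticality_t;

#define CRITICALITY_THRESHOLD 4u
#define COUNTER_MAX           7u

/* Update the predictor when the load is observed to block (or not block) commit. */
static void record_stall(load_criticality_t *c, bool stalled_pipeline) {
    if (stalled_pipeline) {
        if (c->saturating_counter < COUNTER_MAX) c->saturating_counter++;
    } else if (c->saturating_counter > 0) {
        c->saturating_counter--;
    }
}

/* Pick an access mode for the cache line touched by this load. */
static access_mode_t choose_access_mode(const load_criticality_t *c) {
    return (c->saturating_counter >= CRITICALITY_THRESHOLD)
               ? ACCESS_BOOSTED
               : ACCESS_WORST_CASE;
}

int main(void) {
    load_criticality_t ld = { 0 };

    /* Simulated history: the load repeatedly stalls the pipeline,
       so the predictor eventually classifies it as critical. */
    for (int i = 0; i < 5; i++) record_stall(&ld, true);

    printf("access mode: %s\n",
           choose_access_mode(&ld) == ACCESS_BOOSTED ? "boosted (fast)"
                                                     : "worst-case (slow)");
    return 0;
}
```

Compiling and running this prints "access mode: boosted (fast)", showing how a simple saturating-counter predictor could separate latency-critical loads from those that can tolerate the pessimistic worst-case access time.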
