Abstract
Large SRAMs are the practical bottleneck to achieving a low supply voltage because they suffer from process variation-induced bit errors when the supply voltage is lowered. In this paper, we present an error-resilient cache architecture that resolves the main drawback of previous approaches, i.e., the performance degradation at a low supply voltage caused by cache misses on accesses to faulty resources. We utilize cache access locality and error-free resources in a cost-effective manner. First, we classify cache lines into fully and partially accessed groups and apply an appropriate method to each group. For the partially accessed group, we propose matching memory access behavior to error locations through intra-cache-line word-level remapping. To reduce the area overhead of storing the cache access history, we present an access pattern-learning line-fill buffer (LFB). For the fully accessed group, we propose utilizing error-free assist structures in the cache, i.e., an LFB and a victim cache with no process variation-induced errors at the target minimum supply voltage. We also present an error-aware prefetch method that exploits the error-free victim cache to further reduce cache misses due to faulty resources. Experimental results show that the proposed method achieves an average 32.6% reduction in cycles per instruction at an error rate of 0.2% with a small area overhead of 8.2%.
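The core idea for the partially accessed group, i.e., steering the words a program actually uses away from faulty word slots, can be illustrated in software. The C sketch below is a simplified illustrative model rather than the paper's hardware implementation: it assumes an 8-word cache line, a per-line fault bitmap characterized at the target low voltage, and an access bitmap learned while the line resides in the LFB, and it computes a logical-to-physical word remapping that keeps accessed words in error-free word slots. The structure and function names are hypothetical.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define WORDS_PER_LINE 8   /* assumed line geometry: 8 words per cache line */

/* Per-line metadata: which physical word slots are faulty at the low supply
 * voltage, and which logical word offsets the program actually accesses
 * (learned, e.g., while the line sits in the line-fill buffer). */
typedef struct {
    uint8_t faulty_mask;            /* bit i = 1 -> physical word slot i is faulty  */
    uint8_t accessed_mask;          /* bit i = 1 -> logical word offset i is used   */
    uint8_t remap[WORDS_PER_LINE];  /* logical word offset -> physical word slot    */
    bool    remap_valid;
} line_meta_t;

/* Build a word-level remap: place every accessed logical word into a fault-free
 * physical slot and park unaccessed words in the faulty slots. Returns false if
 * there are more accessed words than fault-free slots, in which case the line
 * would have to be handled by another mechanism (e.g., the error-free assist
 * structures). */
static bool build_remap(line_meta_t *m)
{
    uint8_t free_slots[WORDS_PER_LINE], bad_slots[WORDS_PER_LINE];
    int n_free = 0, n_bad = 0;

    for (int s = 0; s < WORDS_PER_LINE; s++) {
        if (m->faulty_mask & (1u << s)) bad_slots[n_bad++] = (uint8_t)s;
        else                            free_slots[n_free++] = (uint8_t)s;
    }

    int fi = 0, bi = 0;
    for (int w = 0; w < WORDS_PER_LINE; w++) {
        if (m->accessed_mask & (1u << w)) {
            if (fi >= n_free) return false;   /* not enough error-free slots */
            m->remap[w] = free_slots[fi++];
        } else {
            /* unaccessed words can tolerate a faulty slot */
            m->remap[w] = (bi < n_bad) ? bad_slots[bi++] : free_slots[fi++];
        }
    }
    m->remap_valid = true;
    return true;
}

int main(void)
{
    /* Example: word slots 2 and 5 are faulty, but only words 0-3 are accessed. */
    line_meta_t m = { .faulty_mask = 0x24, .accessed_mask = 0x0F };
    if (build_remap(&m))
        for (int w = 0; w < WORDS_PER_LINE; w++)
            printf("logical word %d -> physical slot %d\n", w, m.remap[w]);
    return 0;
}

In this example the four accessed words map to slots 0, 1, 3, and 4, while the unused words absorb the faulty slots 2 and 5, so reads and writes never touch an erroneous word.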