Abstract

Process technology continues to deliver smaller and faster transistors, allowing manufacturers to fit more of them on the same chip area and letting processor architects build more complex ILP processors. As a consequence, the fraction of the chip reachable in a single clock cycle is shrinking even as the number of on-chip transistors grows, and power consumption and heat dissipation have become serious concerns. This scenario is forcing processor designers to look for new organizations that deliver the same or better performance from smaller structures. The pressure especially affects on-chip cache memory design, so studies proposing smaller cache organizations that maintain, or even increase, the hit ratio are welcome. Cache schemes that better exploit data locality (bypassing schemes, prefetching techniques, victim caches, etc.) are good examples.

This paper presents a data cache scheme, the filter data cache, which splits the first-level data cache into two independent organizations. Its performance is compared with two other proposals from the open literature, as well as with larger conventional caches. Performance is evaluated in two scenarios: a superscalar processor and a symmetric multiprocessor.

The results show that (i) in the superscalar processor the split data caches perform as well as or better than larger conventional caches; (ii) some splitting schemes work well in multiprocessors while others work less well because of differences in data locality; (iii) the reuse information that some split schemes maintain for management is also useful for designing new, competitive protocols that boost performance in multiprocessors; and (iv) the filter data cache achieves the best performance in both scenarios.
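To make the idea of a split first-level data cache concrete, the following is a minimal, self-contained C++ sketch of one plausible organization: a conventional main L1 array plus a small auxiliary "filter" structure, where newly fetched blocks are held until they show reuse and only then promoted to the main cache. The sizes, the FIFO filter replacement, and the promote-on-second-touch policy are illustrative assumptions for this sketch, not the exact mechanism described in the paper.

```cpp
// Hedged sketch of a split L1 data cache: a main cache plus a small
// filter structure that absorbs blocks until they show reuse.
// Policies and sizes below are assumptions for illustration only.
#include <algorithm>
#include <cstdint>
#include <deque>
#include <iostream>
#include <unordered_map>

constexpr uint64_t kBlockBits   = 6;   // 64-byte blocks (assumed)
constexpr size_t   kMainLines   = 256; // main L1 capacity in lines (assumed)
constexpr size_t   kFilterLines = 16;  // small filter structure (assumed)

struct SplitL1 {
    std::unordered_map<uint64_t, bool> main;   // block address -> valid
    std::deque<uint64_t>               filter; // FIFO of recently fetched blocks

    bool access(uint64_t addr) {
        uint64_t blk = addr >> kBlockBits;
        if (main.count(blk)) return true;                 // hit in the main cache
        auto it = std::find(filter.begin(), filter.end(), blk);
        if (it != filter.end()) {                         // hit in the filter:
            filter.erase(it);                             // the block shows reuse,
            if (main.size() >= kMainLines)                // so promote it to the
                main.erase(main.begin());                 // main cache (arbitrary
            main[blk] = true;                             // victim, a simplification)
            return true;
        }
        // Miss in both structures: fetch into the filter first, so
        // streaming (no-reuse) data never pollutes the main cache.
        if (filter.size() >= kFilterLines) filter.pop_front();
        filter.push_back(blk);
        return false;
    }
};

int main() {
    SplitL1 cache;
    uint64_t trace[] = {0x1000, 0x2000, 0x1000, 0x1040, 0x3000, 0x1000};
    for (uint64_t a : trace)
        std::cout << std::hex << a << (cache.access(a) ? " hit\n" : " miss\n");
}
```

In this sketch the filter acts as the reuse detector: blocks touched only once live and die in the small structure, which is one way a split design can match a larger conventional cache while occupying less area.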
