Abstract
Resizable caches can trade off capacity for access speed to dynamically match the needs of the workload. In single-threaded cores, resizable caches adapt to the phases of the running application. In Simultaneous Multi-Threaded (SMT) cores, caching needs vary greatly with the number of threads and their characteristics, offering even more opportunities to dynamically adjust cache resources to the workload. We demonstrate that the preferred control policy for data cache resizing in an SMT core changes as more threads are run. Prior results on one- and two-thread workloads showed that cache resizing should optimize for cache miss behavior, because misses typically form the critical path. In contrast, we show that with many independent threads running, optimizing for cache hit behavior has more impact, since large SMT workloads have other threads to fill in during a cache miss. Furthermore, these seemingly diametrically opposed policies are closely related mathematically: the former minimizes the arithmetic mean cache access time, while the latter minimizes its harmonic mean. Our algorithm, which we call the hybrid algorithm, smoothly and naturally adjusts between the two strategies as the degree of multithreading varies.
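As a brief illustration of the relationship the abstract names (our notation, not the paper's): for $n$ cache accesses with latencies $t_1, \dots, t_n$, the two objectives are

\[
\bar{t}_{\mathrm{arith}} = \frac{1}{n}\sum_{i=1}^{n} t_i,
\qquad
\bar{t}_{\mathrm{harm}} = \frac{n}{\sum_{i=1}^{n} 1/t_i}.
\]

Minimizing $\bar{t}_{\mathrm{arith}}$ minimizes total access latency, so the long latencies of misses dominate the objective; minimizing $\bar{t}_{\mathrm{harm}}$ is equivalent to maximizing the aggregate access rate $\sum_{i} 1/t_i$, so the short latencies of hits dominate, which matches a heavily multithreaded core that can hide individual misses behind other threads' work.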