Abstract

State-of-the-art chip multiprocessor (CMP) proposals emphasize general optimizations designed to deliver computing power to many types of applications. This approach potentially misses significant performance improvements that can be gained by leveraging application-specific characteristics such as data access behavior. In this paper, we demonstrate how scalable, high-performance parallel systems can be built by classifying data accesses into different categories and treating each category differently. We develop a novel compiler-based approach that speculatively detects a data classification termed practically private, which we demonstrate is ubiquitous in a wide range of parallel applications. Leveraging this classification provides efficient solutions for mitigating data access latency and coherence overhead in today's many-core architectures. While the proposed data classification scheme can be applied to many micro-architectural constructs, including the TLB, coherence directory, and interconnect, we demonstrate its potential through an efficient cache coherence design. Specifically, we show that the compiler-assisted mechanism reduces coherence traffic by an average of 46% and, depending on the scenario, achieves performance improvements of up to 12%, 8%, and 5% over shared, private, and state-of-the-art NUCA-based caching, respectively.
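
To illustrate the notion of practically private data described above, the following is a minimal sketch (not taken from the paper) of a classification heuristic: a memory region accessed overwhelmingly by a single thread, though occasionally touched by others, is treated as practically private rather than shared. The per-page access counters, the PageStats layout, and the 95% threshold are assumptions made purely for illustration; the paper's actual compiler analysis detects this property speculatively at compile time rather than from runtime counts.

    // Hypothetical illustration of classifying a page as private,
    // practically private, or shared based on per-thread access counts.
    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <map>

    struct PageStats {
        uint64_t page_id;
        std::map<int, uint64_t> accesses_by_thread;  // thread id -> access count
    };

    enum class Classification { Private, PracticallyPrivate, Shared };

    Classification classify(const PageStats& p, double threshold = 0.95) {
        uint64_t total = 0, max_by_one = 0;
        for (const auto& [tid, count] : p.accesses_by_thread) {
            total += count;
            max_by_one = std::max(max_by_one, count);
        }
        if (total == 0 || max_by_one == total)
            return Classification::Private;            // only one thread ever touches it
        if (static_cast<double>(max_by_one) / total >= threshold)
            return Classification::PracticallyPrivate; // dominated by one thread
        return Classification::Shared;
    }

    int main() {
        PageStats p{42, {{0, 980}, {1, 15}, {2, 5}}};   // thread 0 dominates accesses
        std::cout << (classify(p) == Classification::PracticallyPrivate
                          ? "practically private\n"
                          : "not practically private\n");
    }

Under this view, practically private data can be handled with private-cache-like mechanisms (avoiding directory lookups and coherence traffic on the common, single-thread path) while still remaining correct for the rare accesses by other threads.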
