Abstract

While scalable coherence has been extensively studied in the context of general purpose chip multiprocessors (CMPs), GPU architectures present a new set of challenges. Introducing conventional directory protocols adds unnecessary coherence traffic overhead to existing GPU applications. Moreover, these protocols increase the verification complexity of the GPU memory system. Recent research, Library Cache Coherence (LCC) [34, 54], explored the use of time-based approaches in CMP coherence protocols. This paper describes a time-based coherence framework for GPUs, called Temporal Coherence (TC), that exploits globally synchronized counters in single-chip systems to develop a streamlined GPU coherence protocol. Synchronized counters enable all coherence transitions, such as invalidation of cache blocks, to happen synchronously, eliminating all coherence traffic and protocol races. We present an implementation of TC, called TC-Weak, which eliminates LCC's trade-off between stalling stores and increasing L1 miss rates to improve performance and reduce interconnect traffic. By providing coherent L1 caches, TC-Weak improves the performance of GPU applications with inter-workgroup communication by 85% over disabling the non-coherent L1 caches in the baseline GPU. We also find that write-through protocols outperform a writeback protocol on a GPU as the latter suffers from increased traffic due to unnecessary refills of write-once data.
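To make the time-based idea concrete, the sketch below shows how an L1 copy can self-invalidate once a globally synchronized counter passes the lease granted when the block was filled, so no invalidation messages or transient protocol states are needed. This is a minimal illustration under assumed names (CacheBlock, lease_expiry, global_time, read_hit, fill); it is not the paper's hardware implementation of TC or TC-Weak.

```cuda
// Illustrative sketch of time-based self-invalidation in a TC-style protocol.
// All structure and field names here are assumptions for exposition only.
#include <cstdint>
#include <unordered_map>

struct CacheBlock {
    uint64_t tag;           // block address tag
    uint64_t lease_expiry;  // synchronized time at which this L1 copy expires
};

struct L1Cache {
    std::unordered_map<uint64_t, CacheBlock> blocks;

    // A read hit is valid only while the block's lease has not expired.
    // Once global_time passes lease_expiry the copy is treated as invalid,
    // so the L2 never has to send an invalidation message.
    bool read_hit(uint64_t tag, uint64_t global_time) const {
        auto it = blocks.find(tag);
        return it != blocks.end() && global_time < it->second.lease_expiry;
    }

    // On a fill, the L2 grants a lease; the L1 records when the copy expires.
    void fill(uint64_t tag, uint64_t global_time, uint64_t lease) {
        blocks[tag] = CacheBlock{tag, global_time + lease};
    }
};
```

Because every L1 copy carries an expiry time, the L2 can determine locally when all copies of a block have lapsed; TC-Weak uses this to avoid stalling stores while leases are outstanding, which is the trade-off with LCC noted in the abstract.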

Highlights

  • Graphics processor units (GPUs) have become ubiquitous in high-throughput, general purpose computing

  • The paper describes the memory system and cache hierarchy of the baseline non-coherent GPU architecture, similar to NVIDIA’s Fermi [44], that it evaluates

  • This paper presents and addresses the set of challenges introduced by GPU cache coherence


Summary

Introduction

Graphics processor units (GPUs) have become ubiquitous in high-throughput, general-purpose computing. General-purpose chip multiprocessors (CMPs) regularly employ hardware cache coherence [17, 30, 32, 50] to enforce strict memory consistency models. These consistency models form the basis of memory models for high-level languages [10, 35] and provide the synchronization primitives employed by multithreaded CPU applications. The paper evaluates a baseline non-coherent GPU architecture, similar to NVIDIA’s Fermi [44], and describes its memory system and cache hierarchy. In this architecture, scalar threads are managed as SIMD execution groups of 32 threads, called a warp in NVIDIA terminology or a wavefront in AMD terminology.
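A minimal CUDA sketch of the inter-workgroup communication pattern targeted by the paper is given below; the kernel and variable names are assumptions for illustration, not code from the paper. One thread block publishes a value through a flag and another spins on that flag; on a GPU with non-coherent L1 caches this only works if the accesses bypass the L1 (here via volatile) or if the L1s are kept coherent, which is the case TC-Weak addresses.

```cuda
// Assumed example (not from the paper) of inter-workgroup communication:
// block 0 produces a value and raises a flag, block 1 spins on the flag and
// then reads the value. Visibility across blocks relies on coherent caches
// or on explicitly bypassing the non-coherent L1 (volatile accesses here).
#include <cstdio>

__global__ void producer_consumer(volatile int *flag, volatile int *data) {
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        *data = 42;              // produce the value
        __threadfence();         // order the data write before the flag write
        *flag = 1;               // publish
    } else if (blockIdx.x == 1 && threadIdx.x == 0) {
        while (*flag == 0) { }   // spin until the producer's write is visible
        printf("consumed %d\n", *data);
    }
}

int main() {
    int *flag, *data;
    cudaMalloc(&flag, sizeof(int));
    cudaMalloc(&data, sizeof(int));
    cudaMemset(flag, 0, sizeof(int));
    producer_consumer<<<2, 32>>>(flag, data);
    cudaDeviceSynchronize();
    cudaFree(flag);
    cudaFree(data);
    return 0;
}
```

With coherent L1 caches such flag-based synchronization can use ordinary cached loads and stores instead of L1-bypassing accesses, which is why the abstract reports an 85% speedup for inter-workgroup-communication applications over disabling the non-coherent L1 caches.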

