Multi-chip Graphics Processing Unit (GPU) systems are critical for scaling performance beyond a single GPU chip for a wide variety of important emerging applications. A key challenge for multi-chip GPUs, however, is overcoming the bandwidth gap between inter-chip and intra-chip communication. Accesses to shared data, i.e., data accessed by multiple chips, pose a major performance challenge because they incur remote memory accesses that may congest the inter-chip links and degrade overall system performance. This article characterizes the shared dataset in multi-chip GPUs in terms of (1) truly versus falsely shared data, (2) how the shared dataset scales with input size, (3) along which dimensions the shared dataset scales, and (4) how sensitive the shared dataset is to the input’s characteristics, e.g., node degree and connectivity in graph workloads. We observe significant variation in scaling behavior across workloads: some workloads feature a shared dataset that scales linearly with input size, whereas others feature sublinear scaling (following a \(\sqrt{2}\) or \(\sqrt[3]{2}\) relationship). We further demonstrate how the shared dataset affects the optimum last-level cache organization (memory-side versus SM-side) in multi-chip GPUs, as well as the optimum memory page allocation and thread scheduling policies. Sensitivity analyses demonstrate that these insights hold across a broad design space.
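To make the sublinear cases concrete, here is one worked reading of the \(\sqrt{2}\)/\(\sqrt[3]{2}\) relationship, under the assumption that these factors describe how the shared-dataset size grows per doubling of the input size \(N\) (the symbol \(S(N)\) is introduced here purely for illustration and does not appear in the article):

\[
S(N) \propto N^{1/2} \;\Rightarrow\; \frac{S(2N)}{S(N)} = \sqrt{2},
\qquad
S(N) \propto N^{1/3} \;\Rightarrow\; \frac{S(2N)}{S(N)} = \sqrt[3]{2}.
\]

In other words, square-root or cube-root scaling of the shared dataset implies that doubling the input grows the shared dataset by a factor of only about 1.41 or 1.26, respectively, rather than the factor of 2 seen under linear scaling.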