Abstract

Programming paradigms in High-Performance Computing have been shifting toward task-based models, which adapt readily to heterogeneous and scalable supercomputers. The performance of a task-based application depends heavily on the runtime's scheduling heuristics and on its ability to exploit computing and communication resources. Unfortunately, traditional performance-analysis strategies are ill-suited to task-based runtime systems and applications: they expect regular behavior with distinct communication and computation phases, whereas task-based applications exhibit no clear phases. Moreover, the finer granularity of task-based applications typically induces stochastic behavior, leading to irregular execution structures that are difficult to analyze. Furthermore, combining information about the application structure, the scheduler, and the hardware is generally essential to understand performance issues. This paper presents a flexible framework that combines several sources of information and builds custom visualization panels, making it possible to understand and pinpoint performance problems caused by bad scheduling decisions in task-based applications. Three case studies using StarPU-MPI, a task-based multi-node runtime system, show how the framework can be used to study the performance of the well-known Cholesky factorization. Performance improvements include a better task partitioning among the multi-(GPU, core) resources, bringing performance closer to theoretical lower bounds; improved MPI pipelining in the multi-(node, core, GPU) case, which reduces the slow start; and changes in the runtime system that increase MPI bandwidth, with gains of up to 13% in total makespan.
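
To make the abstract's notion of task-based programming concrete, here is a minimal sketch (not taken from the paper) of how a single tile operation is submitted as a task to StarPU, the runtime system studied here. The kernel name scale_tile, the matrix size, and the scaling factor are illustrative assumptions; the StarPU calls themselves (starpu_task_insert, starpu_matrix_data_register, and the data-access macros) are part of the library's public C API. The key point is that submission is asynchronous and placement is left to the runtime scheduler, which is precisely the behavior the paper's visualization framework is designed to analyze.

#include <starpu.h>
#include <stdint.h>
#include <stdlib.h>

/* CPU implementation of a hypothetical tile kernel: scales every
 * element of one matrix tile by alpha. */
static void scale_tile(void *buffers[], void *cl_arg)
{
    float *tile  = (float *)STARPU_MATRIX_GET_PTR(buffers[0]);
    unsigned nx  = STARPU_MATRIX_GET_NX(buffers[0]);
    unsigned ny  = STARPU_MATRIX_GET_NY(buffers[0]);
    unsigned ld  = STARPU_MATRIX_GET_LD(buffers[0]);
    float alpha;
    starpu_codelet_unpack_args(cl_arg, &alpha);

    for (unsigned j = 0; j < ny; j++)
        for (unsigned i = 0; i < nx; i++)
            tile[j * ld + i] *= alpha;
}

static struct starpu_codelet scale_cl = {
    .cpu_funcs = { scale_tile },
    .nbuffers  = 1,
    .modes     = { STARPU_RW },
};

int main(void)
{
    const unsigned n = 256;           /* illustrative tile size */
    float *a = calloc(n * n, sizeof(*a));
    float alpha = 2.0f;
    starpu_data_handle_t h;

    if (starpu_init(NULL) != 0)
        return 1;

    /* Register the tile so the runtime can track it and move it
     * between workers (CPU cores, GPUs) as its scheduler decides. */
    starpu_matrix_data_register(&h, STARPU_MAIN_RAM, (uintptr_t)a,
                                n, n, n, sizeof(*a));

    /* Asynchronous task submission: when and where the task runs
     * is decided by the runtime's scheduling heuristics. */
    starpu_task_insert(&scale_cl,
                       STARPU_RW, h,
                       STARPU_VALUE, &alpha, sizeof(alpha),
                       0);

    starpu_task_wait_for_all();
    starpu_data_unregister(h);
    starpu_shutdown();
    free(a);
    return 0;
}

A full tiled Cholesky, as studied in the paper's case studies, submits many such tasks (POTRF, TRSM, SYRK, GEMM) whose data dependencies form a task graph that the scheduler executes across nodes, cores, and GPUs.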
