Abstract

PHASTA falls under the category of high-performance scientific computation codes designed for solving partial differential equations (PDEs). It is a massively parallel, unstructured, implicit solver with a particular emphasis on computational fluid dynamics (CFD) applications. More specifically, PHASTA is a parallel, hierarchic, adaptive, stabilized, transient analysis code that effectively employs advanced anisotropic adaptive algorithms and numerical models of flow physics. In this paper, we first describe the parallelization of PHASTA's core algorithms for an implicit solve, where a key assumption is that on a properly balanced supercomputer with appropriate attributes, PHASTA should continue to strongly scale to high core counts until the computational workload per core becomes insufficient and inter-processor communication starts to dominate. We then present and analyze PHASTA's parallel performance across a variety of current near-petascale systems, including IBM BG/L, IBM BG/P, Cray XT3, and a custom Opteron-based supercluster; this selection of systems with inherently different attributes covers a majority of potential candidates for upcoming petascale systems. On one hand, we achieve near-perfect (linear) strong scaling out to 32,768 cores of IBM BG/L, showing that a system with desirable attributes will allow implicit solvers to strongly scale to high core counts (including on petascale systems). On the other hand, we find that the tipping point for strong scaling differs fundamentally among current supercomputer systems. To understand the loss of scaling observed on one particular system (the Opteron-based supercluster), we analyze its performance and demonstrate that such a loss can be attributed to an imbalance in a system attribute, specifically the compute-node operating system (OS). In particular, PHASTA scales well to high core counts (up to 32,768 cores) during an implicit solve on systems whose compute nodes use lightweight kernels (for example, IBM BG/L); however, we show that on a system where the compute-node OS is more heavyweight (e.g., one with background processes), the loss of strong scaling is observed at a much lower core count (4,096 cores).
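As a rough illustration of the strong-scaling metric discussed above, the sketch below computes a scaling factor relative to a base core count under the conventional definition s = (T_base * P_base) / (T_P * P), where 1.0 is ideal. The core counts and solve times are placeholders, not measurements from the paper.

```c
/* Minimal sketch (placeholder numbers, not data from the paper):
 * strong-scaling factor relative to a base core count, where 1.0 is ideal. */
#include <stdio.h>

int main(void) {
    /* Hypothetical solve times (seconds) for a fixed problem size. */
    const int    cores[]  = {   512,  4096, 32768 };
    const double time_s[] = { 800.0, 100.0,  12.8 };
    const int n = 3;

    const double base_work = time_s[0] * cores[0];  /* core-seconds at base */
    for (int i = 0; i < n; ++i) {
        double factor = base_work / (time_s[i] * cores[i]);
        printf("%6d cores: scaling factor %.3f (1.0 = ideal)\n",
               cores[i], factor);
    }
    return 0;
}
```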

Highlights

  • PHASTA is a parallel, hierarchic (2nd- to 5th-order accurate), adaptive, stabilized transient analysis tool for the solution of compressible or incompressible flows.

  • In this paper we do not provide a detailed description of the physical models and mathematical formulations used in PHASTA; rather, we focus our attention on the parallelization of PHASTA's core algorithms for massively parallel processing and present how they scale across a variety of current near-petascale systems, including IBM BG/L, IBM BG/P, Cray XT3, and a custom Opteron-based supercluster.

  • We demonstrated that on properly balanced supercomputer systems, unstructured implicit codes are capable of achieving strong scaling at the full scale of the system; we showed strong scaling out to 32,768 cores on the full IBM BG/L system at the CCNI, Rensselaer Polytechnic Institute (RPI).


Summary

Introduction and contributions

PHASTA is a parallel, hierarchic (2nd- to 5th-order accurate), adaptive, stabilized (finite-element) transient analysis tool for the solution of compressible or incompressible flows. It falls under the realm of computational/numerical methods for solving partial differential equations, which have matured for a wide range of physical problems, including ones in fluid mechanics, electromagnetics, and biomechanics, to name a few. It has effectively applied recent anisotropic adaptive algorithms [19,25,26] along with advanced numerical models of flow physics [7,10,33,34,35,36]. We observe that when only modest amounts of real work (such as 1 million multiply-add operations (MADDs)) occur between subsequent global allreduce operations, the time spent in the allreduce increases significantly due to OS interference, which in turn leads to a loss of strong scaling at relatively lower core counts.
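The following MPI sketch illustrates the kind of measurement behind this observation; it is not PHASTA code, and the constants (N_ITERS, N_MADDS) and the trivial multiply-add loop are illustrative assumptions. Each rank performs a fixed amount of local work between global allreduce operations and times only the allreduce; on a compute node whose OS runs background processes, the measured allreduce time typically grows with core count even though the local work per core is constant.

```c
/* Minimal sketch (not PHASTA code): fixed multiply-add work between
 * global allreduces, timing only the allreduce. */
#include <mpi.h>
#include <stdio.h>

#define N_ITERS 100
#define N_MADDS 1000000   /* ~1 million multiply-adds between allreduces */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double x = 1.0000001, acc = 0.0, t_allreduce = 0.0;
    for (int it = 0; it < N_ITERS; ++it) {
        /* Local "real work": a stand-in for element-level computation. */
        for (int i = 0; i < N_MADDS; ++i)
            acc = acc * x + 1.0;

        double local = acc, global = 0.0;
        double t0 = MPI_Wtime();
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        t_allreduce += MPI_Wtime() - t0;
        acc = global * 1e-300;   /* feed the result back so the work loop is not optimized away */
    }

    if (rank == 0)
        printf("avg time per allreduce: %.6f s\n", t_allreduce / N_ITERS);

    MPI_Finalize();
    return 0;
}
```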

Basics of flow solver
Parallel paradigm
Near petascale systems
Strong scaling results and analysis
Parallel performance results
Parallel performance analysis
Findings
Conclusions