Abstract

Symbolic computation has underpinned a number of key advances in Mathematics and Computer Science. Applications are typically large and potentially highly parallel, making them good candidates for parallel execution at a variety of scales from multi‐core to high‐performance computing systems. However, much existing work on parallel computing is based around numeric rather than symbolic computations. In particular, symbolic computing presents particular problems in terms of varying granularity and irregular task sizes that do not match conventional approaches to parallelisation. It also presents problems in terms of the structure of the algorithms and data. This paper describes a new implementation of the free open‐source GAP computational algebra system that places parallelism at the heart of the design, dealing with the key scalability and cross‐platform portability problems. We provide three system layers that deal with the three most important classes of hardware: individual shared memory multi‐core nodes, mid‐scale distributed clusters of (multi‐core) nodes and full‐blown high‐performance computing systems, comprising large‐scale tightly connected networks of multi‐core nodes. This requires us to develop new cross‐layer programming abstractions in the form of new domain‐specific skeletons that allow us to seamlessly target different hardware levels. Our results show that, using our approach, we can achieve good scalability and speedups for two realistic exemplars, on high‐performance systems comprising up to 32000 cores, as well as on ubiquitous multi‐core systems and distributed clusters. The work reported here paves the way towards full‐scale exploitation of symbolic computation by high‐performance computing systems, and we demonstrate the potential with two major case studies. © 2016 The Authors. Concurrency and Computation: Practice and Experience Published by John Wiley & Sons Ltd.

Highlights

  • This paper considers how parallelism can be provided in a production symbolic computation system, GAP (Groups, Algorithms, Programming [1]), to meet the demands of a variety of users

  • We have provided a systematic description of high-performance computing (HPC)-GAP, a thorough re-engineering of the GAP computational algebra system for the 21st Century, which incorporates mechanisms to deal with parallelism at the multicore, distributed cluster and HPC levels

  • We have developed MPI-GAP to exploit ubiquitous small clusters and designed and implemented a highly sophisticated coordination system for HPC systems, SymGridPar2, which uses parallel Haskell to coordinate large-scale GAP computations


Summary

INTRODUCTION

This paper considers how parallelism can be provided in a production symbolic computation system, GAP (Groups, Algorithms, Programming [1]), to meet the demands of a variety of users. Our work establishes symbolic computation as a new and exciting application domain for HPC, and it provides a vade mecum for the process of producing effective high-performance versions of large legacy systems. We give a systematic description of HPC-GAP as an integrated suite of new language extensions and libraries for parallel symbolic computation: a thread-safe multi-core implementation, GAP5 (Section 2); an MPI binding to exploit clusters (Section 3); and the SymGridPar2 framework that provides symbolic computation at HPC scale (Section 4). Together, these allow us to address scalability at multiple levels of abstraction, up to large-scale HPC systems (Section 5).
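To make the task-level programming model concrete, the sketch below shows how the SumEuler benchmark (summing Euler's totient function over an interval) might be expressed with GAP5-style task primitives. It is a minimal illustration under stated assumptions, not the paper's implementation: the helper names SumEulerChunk and SumEulerPar and the fixed chunking scheme are our own, and we assume the RunTask/TaskResult primitives of the HPC-GAP task library.

    # Hedged sketch: task-parallel SumEuler using GAP5-style task primitives.
    # SumEulerChunk, SumEulerPar and the chunking strategy are illustrative only.

    # Sequential work unit: sum of Euler's totient over [lo .. hi].
    SumEulerChunk := function(lo, hi)
      return Sum([lo .. hi], Phi);
    end;

    # Split [1 .. n] into chunks, run each chunk as a task, then combine results.
    SumEulerPar := function(n, chunkSize)
      local tasks, lo;
      tasks := [];
      lo := 1;
      while lo <= n do
        # RunTask creates a task that may be executed by another worker thread.
        Add(tasks, RunTask(SumEulerChunk, lo, Minimum(lo + chunkSize - 1, n)));
        lo := lo + chunkSize;
      od;
      # TaskResult blocks until the corresponding task has completed.
      return Sum(List(tasks, TaskResult));
    end;

    # Example call (illustrative): SumEulerPar(100000, 1000);

A fixed chunk size is used here only to keep the sketch short; as the paper notes, the irregular task sizes typical of symbolic computation generally call for more adaptive granularity control.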

Computational algebra and the GAP system
Parallelism and high-performance computing
Parallel computational algebra
PARALLELISM SUPPORT IN GAP5
Task introduction and management
SumEuler in GAP5
Shared regions in GAP5
Comparison with other parallel computational algebra systems
MPI-GAP DESIGN AND IMPLEMENTATION
SumEuler in MPI-GAP
THE DESIGN AND IMPLEMENTATION OF SYMGRIDPAR2
Coordination DSL
GAP binding
The SymGridPar2 programming model
Advanced features
PERFORMANCE EVALUATION
GAP5 evaluation
MPI-GAP evaluation
SGP2 evaluation
HPC-GAP interworking
Orbits in GAP5
Hecke algebras in SGP2
CONCLUSION