Abstract
Symbolic computation has underpinned a number of key advances in Mathematics and Computer Science. Applications are typically large and potentially highly parallel, making them good candidates for parallel execution at a variety of scales from multi-core to high-performance computing systems. However, much existing work on parallel computing is based around numeric rather than symbolic computations. In particular, symbolic computing presents particular problems in terms of varying granularity and irregular task sizes that do not match conventional approaches to parallelisation. It also presents problems in terms of the structure of the algorithms and data. This paper describes a new implementation of the free open-source GAP computational algebra system that places parallelism at the heart of the design, dealing with the key scalability and cross-platform portability problems. We provide three system layers that deal with the three most important classes of hardware: individual shared-memory multi-core nodes, mid-scale distributed clusters of (multi-core) nodes, and full-blown high-performance computing systems comprising large-scale, tightly connected networks of multi-core nodes. This requires us to develop new cross-layer programming abstractions in the form of new domain-specific skeletons that allow us to seamlessly target different hardware levels. Our results show that, using our approach, we can achieve good scalability and speedups for two realistic exemplars, on high-performance systems comprising up to 32000 cores, as well as on ubiquitous multi-core systems and distributed clusters. The work reported here paves the way towards full-scale exploitation of symbolic computation by high-performance computing systems, and we demonstrate the potential with two major case studies. © 2016 The Authors. Concurrency and Computation: Practice and Experience Published by John Wiley & Sons Ltd.
Highlights
This paper considers how parallelism can be provided in a production symbolic computation system, GAP (Groups, Algorithms, Programming [1]), to meet the demands of a variety of users.
We have provided a systematic description of high-performance computing (HPC)-GAP, a thorough re-engineering of the GAP computational algebra system for the 21st century, which incorporates mechanisms to deal with parallelism at the multi-core, distributed cluster and HPC levels.
We have developed MPIGAP to exploit ubiquitous small clusters, and we have designed and implemented a highly sophisticated coordination system for HPC systems, SymGridPar2, which uses parallel Haskell to coordinate large-scale GAP computations (see the sketch below).
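SymGridPar2 coordinates GAP computations from parallel Haskell. As a minimal, self-contained analogue of that coordination style (written against GHC's standard Control.Parallel.Strategies library, not the SymGridPar2 API; the cost function and chunk size are illustrative assumptions), a parallel map over irregularly sized tasks might look like this:

import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Illustrative stand-in for a symbolic task whose cost varies
-- strongly with its input, as symbolic computations typically do.
cost :: Integer -> Integer
cost n = sum [1 .. n * n]

-- A parallel-map skeleton: evaluate all tasks in parallel, chunking
-- the task list so many small tasks amortise scheduling overhead.
parMapChunked :: Int -> (a -> Integer) -> [a] -> [Integer]
parMapChunked chunk f xs = map f xs `using` parListChunk chunk rdeepseq

main :: IO ()
main = print (sum (parMapChunked 8 cost [1 .. 2000]))

Compiled with ghc -threaded and run with +RTS -N, the runtime spreads the chunks across cores; chunking is one simple way to keep many small, uneven tasks from swamping the scheduler.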
Summary
This paper considers how parallelism can be provided in a production symbolic computation system, GAP (Groups, Algorithms, Programming [1]), to meet the demands of a variety of users. Our work establishes symbolic computation as a new and exciting application domain for HPC, and it provides a vade mecum for the process of producing effective high-performance versions of large legacy systems. We give a systematic description of HPC-GAP as an integrated suite of new language extensions and libraries for parallel symbolic computation: a thread-safe multi-core implementation, GAP5 (Section 2); an MPI binding to exploit clusters (Section 3); and the SymGridPar2 framework that provides symbolic computation at HPC scale (Section 4). Together, these allow us to address scalability at multiple levels of abstraction, up to large-scale HPC systems (Section 5).
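The cross-layer, domain-specific skeletons mentioned above capture common patterns such as divide-and-conquer with explicit granularity control. The following is a minimal Haskell sketch of such a skeleton, assuming a simple depth threshold as the granularity control; it illustrates the technique and is not code from HPC-GAP or SymGridPar2.

import Control.Parallel (par, pseq)

-- A minimal divide-and-conquer skeleton with a depth threshold:
-- once the parallel depth is exhausted, subproblems are solved
-- sequentially, a standard way of bounding task granularity
-- when task sizes are irregular.
divConq :: Int              -- remaining parallel depth
        -> (p -> Bool)      -- is the problem trivially small?
        -> (p -> s)         -- solve a problem sequentially
        -> (p -> (p, p))    -- split a problem in two
        -> (s -> s -> s)    -- combine sub-results
        -> p -> s
divConq depth trivial solve split combine = go depth
  where
    go d p
      | trivial p = solve p
      | d <= 0    = solve p            -- sequential below the threshold
      | otherwise =
          let (l, r) = split p
              sl     = go (d - 1) l
              sr     = go (d - 1) r
          in sl `par` (sr `pseq` combine sl sr)

-- Example use: summing a large list through the skeleton.
main :: IO ()
main = print (divConq 4
                      ((< 1000) . length)
                      sum
                      (\xs -> splitAt (length xs `div` 2) xs)
                      (+)
                      [1 .. (100000 :: Integer)])

The depth threshold bounds the number of sparks created, so uneven subproblem sizes below the threshold are absorbed by sequential evaluation rather than generating further tiny tasks.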