Abstract

A number of features of today's high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; use of accelerators (GPUs, FPGAs, coprocessors), making architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half-precision. Moreover, as well as maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that we can take to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

Highlights

  • High-performance computing (HPC) illustrates well the rapid pace of technological change

  • A current high-end smartphone can perform linear algebra computations at speeds substantially exceeding those of a Cray-1, which was first installed in 1976 and was widely regarded as the first successful supercomputer

  • In this paper we discuss some of the approaches we can take to developing numerical algorithms for high-performance computational science, look towards the next generation of supercomputers, and discuss the challenges they will bring

Introduction

High-performance computing (HPC) illustrates well the rapid pace of technological change. Just a few years ago, teraFLOP/s (10^12 floating-point operations per second) and terabytes (10^12 bytes of secondary storage) defined state-of-the-art HPC. Today, those same values represent a PC with an NVIDIA accelerator and local storage. In 2019, HPC is defined by multiple petaFLOP/s (10^15 floating-point operations per second) supercomputing systems and cloud data centers with many exabytes of secondary storage. Computers attaining an exascale rate of computation (10^18 floating-point operations per second) will soon be available, and for their success we will need numerical software that extracts good performance from these massively parallel machines. In this paper we discuss some of the approaches we can take to developing numerical algorithms for high-performance computational science, look towards the next generation of supercomputers, and discuss the challenges they will bring.

Mixed Precision Algorithms
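
A common mixed-precision pattern, sketched below purely for illustration (it is not necessarily the specific algorithm developed in the full text), is iterative refinement: perform the expensive solve in low precision, then cheaply refine the answer in high precision using double-precision residuals. A minimal NumPy sketch, assuming a well-conditioned dense system:

```python
import numpy as np

def solve_mixed_precision(A, b, iters=5):
    """Solve Ax = b by solving in float32 and refining in float64.

    Illustrative sketch only: a production code would reuse a single
    low-precision LU factorization for every solve and would test the
    residual for convergence rather than iterating a fixed number of times.
    """
    A32 = A.astype(np.float32)
    # Low-precision solve (stands in for a float32 LU factorization).
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in double precision
        # Correction from another cheap low-precision solve.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
x_true = rng.standard_normal(100)
b = A @ x_true
x = solve_mixed_precision(A, b)
```

Provided the condition number of A is not too large relative to the low-precision unit roundoff, each refinement step reduces the error, and the final accuracy approaches that of a full double-precision solve at a fraction of the factorization cost.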
Algorithms minimizing data transfer
Exploiting Data Sparsity