Abstract

Abstract. Mixed-precision approaches can provide substantial speed-ups for both computing- and memory-bound codes with little effort. Most scientific codes over-engineer their numerical precision, so models consume more resources than required without knowing where high precision is actually needed and where it is not. Consequently, it is possible to improve computational performance by establishing a more appropriate choice of precision. The only input that is needed is a method to determine which real variables can be represented with fewer bits without affecting the accuracy of the results. This paper presents a novel method that enables modern and legacy codes to benefit from a reduction in the precision of certain variables without sacrificing accuracy. It rests on a simple idea: we reduce the precision of a group of variables and measure how the outputs are affected, which tells us the level of precision that those variables truly need. Modifying and recompiling the code for each case to be evaluated would require a prohibitive amount of effort. Instead, the method presented in this paper relies on a tool called a reduced-precision emulator (RPE) that significantly streamlines the process. Using the RPE and a list of parameters containing the precision to be used for each real variable in the code, a single binary can emulate the effect of a specific choice of precision on the outputs. Once the effects of reduced precision can be emulated, we can design tests that reveal how sensitive the model variables are to their numerical precision. The number of possible combinations is prohibitively large and therefore impossible to explore exhaustively. The alternative of screening the variables individually provides some insight into the precision each variable requires, but complex interactions involving several variables may remain hidden. Instead, we use a divide-and-conquer algorithm that identifies the parts requiring high precision and establishes a set of variables that can handle reduced precision. This method has been tested on two state-of-the-art ocean models, the Nucleus for European Modelling of the Ocean (NEMO) and the Regional Ocean Modeling System (ROMS), with very promising results. This information is crucial for building, in the next phase, an actual mixed-precision version of the code that will deliver the promised performance benefits.
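
The two building blocks described above, per-variable precision emulation and a divide-and-conquer search over groups of variables, can be sketched in a few lines. The sketch below is a minimal, hypothetical Python rendering rather than the Fortran RPE library used with NEMO and ROMS: reduce_precision mimics dropping significand bits from a double, and passes stands in for running the emulated binary with the precision of a group of variables reduced and comparing its outputs against a reference run. The procedure actually applied to the models is more elaborate; this sketch keeps only the core idea.

```python
import struct

def reduce_precision(x: float, sbits: int) -> float:
    """Emulate storing x with only `sbits` explicit significand bits by
    zeroing the trailing bits of an IEEE-754 double (52 stored bits).
    A real emulator would round to nearest; truncation keeps the sketch short."""
    if sbits >= 52:
        return x
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    mask = ~((1 << (52 - sbits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack("<d", struct.pack("<Q", bits & mask))[0]

def find_reducible(variables, passes):
    """Divide and conquer over a list of variable names.  `passes(subset)`
    is expected to run the model with the precision of `subset` reduced and
    report whether the outputs stay within the accepted tolerance.  Returns
    the variables that tolerated reduced precision as a group."""
    if not variables:
        return []
    if passes(variables):
        return list(variables)   # the whole group tolerates reduced precision
    if len(variables) == 1:
        return []                # this variable must keep high precision
    mid = len(variables) // 2
    return (find_reducible(variables[:mid], passes)
            + find_reducible(variables[mid:], passes))

if __name__ == "__main__":
    # Toy check of the truncation helper: pi kept to 10 significand bits.
    print(reduce_precision(3.141592653589793, 10))   # prints 3.140625
```

Compared with testing every combination of variables, the bisection needs far fewer model runs whenever large groups tolerate reduced precision together, which is what makes the screening affordable.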

Highlights

  • We propose a method that automatically explores the precision required for the real variables used in state-of-the-art Earth system models (ESMs)


Introduction

Global warming and climate change are a great challenge for humankind, and given the social (e.g., Kniveton et al., 2012, regarding climate refugees), economic (e.g., Whiteman and Hope, 2013, regarding the trillion-dollar problem) and environmental threats (e.g., Bellard et al., 2012, regarding mass extinctions) that they pose, any effort to understand and fight them falls short. The magnitude and complexity of these systems make it difficult for scientists to observe and understand them fully. For this reason, the birth of computational science was a turning point, leading to the development of Earth system models (ESMs) that allowed for the execution of experiments that had been impossible until then. ESMs, despite being incomplete, inaccurate and uncertain, have provided a framework in which it is possible to build upon knowledge, and they have become crucial tools (Hegerl and Zwiers, 2011). Since their inception, the capability to mimic the climate system has increased, and with it the capacity to perform useful forecasts (Bauer et al., 2015). The huge investment in computational resources that is necessary to perform simulations with ESMs implies that investing time in optimizing them will be of value.
