Abstract

More and more massively parallel codes running on hundreds of thousands of cores are entering the computational science and engineering domain, allowing high-fidelity computations on up to trillions of unknowns for very detailed analyses of the underlying problems. Such runs typically produce gigabytes of data, hindering both efficient storage and (interactive) data exploration. Advanced approaches based on inherently distributed data formats such as Hierarchical Data Format version 5 (HDF5) become necessary here to avoid long latencies when storing the data and to support fast (random) access when retrieving the data for visual processing. This paper presents design considerations and implementation aspects of an HDF5-based I/O kernel that supports fast checkpointing, restarting, and selective visualisation using a single shared output file for an existing computational fluid dynamics framework. This functionality is achieved by including the framework's hierarchical data structure in the file, which also opens the door for additional steering functionality. Finally, the performance of the kernel's write routines is presented: bandwidths close to the theoretical peak of modern supercomputing clusters were achieved by avoiding file locking and by using collective buffering.
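The abstract itself contains no code; as a hedged illustration of the single-shared-file, collective-buffering approach it describes, the following minimal C sketch shows how the ranks of an MPI job might write disjoint hyperslabs of one dataset into a single shared HDF5 file via HDF5's MPI-IO file driver and a collective transfer property list. The file name `checkpoint.h5`, the dataset name `u`, and the per-rank size are illustrative assumptions, not details of the paper's actual kernel.

```c
/* Sketch: collective parallel write of one shared HDF5 file.
 * Assumed names/sizes ("checkpoint.h5", dataset "u", local_n) are
 * illustrative only. Compile with h5pcc, run under mpirun. */
#include <hdf5.h>
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const hsize_t local_n = 1024;            /* unknowns per rank (assumed) */
    double *buf = malloc(local_n * sizeof(double));
    for (hsize_t i = 0; i < local_n; ++i)
        buf[i] = (double)rank;               /* dummy payload */

    /* Open one shared file through the MPI-IO virtual file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("checkpoint.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Global dataspace spans all ranks; each rank selects its own slab. */
    hsize_t global_n = local_n * (hsize_t)size;
    hid_t filespace = H5Screate_simple(1, &global_n, NULL);
    hid_t dset = H5Dcreate2(file, "u", H5T_NATIVE_DOUBLE, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    hsize_t offset = local_n * (hsize_t)rank;
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL,
                        &local_n, NULL);
    hid_t memspace = H5Screate_simple(1, &local_n, NULL);

    /* Collective transfer enables MPI-IO collective buffering. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
    H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

Requesting `H5FD_MPIO_COLLECTIVE` lets the MPI-IO layer aggregate many small per-rank requests into a few large contiguous writes (two-phase I/O), which is the collective-buffering behaviour the abstract credits for bandwidths close to the theoretical peak.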
