Abstract

In this paper we argue that scientific applications traditionally considered as representing typical HPC workloads can be successfully and efficiently ported to a cloud infrastructure. We propose a porting methodology that enables parallelization of communication- and memory-intensive applications while achieving a good communication-to-computation ratio and satisfactory performance in a cloud infrastructure. This methodology comprises several aspects: (1) a task agglomeration heuristic that increases task granularity while ensuring that tasks fit in memory; (2) a task scheduling heuristic that increases data locality; and (3) a two-level storage architecture that enables in-memory storage of intermediate data. We implement this methodology in a scientific workflow system and use it to parallelize a multi-frontal solver for finite-element meshes, deploy it in a cloud, and execute it as a workflow. The results obtained from the experiments confirm that the proposed porting methodology significantly reduces communication costs and achieves satisfactory performance. We believe that these results constitute a valuable step toward a wider adoption of cloud infrastructures for computational science applications.
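The abstract's first methodological aspect, task agglomeration under a memory constraint, can be illustrated with a minimal sketch: fine-grained tasks are greedily merged into coarser groups to improve the communication-to-computation ratio, but only while the merged group's estimated memory footprint still fits on a single worker node. The Task class, the memory estimates, and the node_memory_limit parameter below are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Task:
    name: str
    mem_bytes: int          # estimated peak memory needed by the task
    children: List["Task"]  # tasks that consume this task's output


def agglomerate(tasks: List[Task], node_memory_limit: int) -> List[List[Task]]:
    """Greedily pack tasks into groups whose total memory stays under the limit."""
    groups: List[List[Task]] = []
    current: List[Task] = []
    current_mem = 0

    for task in tasks:  # tasks assumed to arrive in a valid topological order
        if current and current_mem + task.mem_bytes > node_memory_limit:
            groups.append(current)          # close the current group
            current, current_mem = [], 0
        current.append(task)
        current_mem += task.mem_bytes

    if current:
        groups.append(current)
    return groups


if __name__ == "__main__":
    gib = 1024 ** 3
    # Hypothetical elimination tasks of a multi-frontal solver workflow.
    tasks = [Task(f"elim_{i}", mem_bytes=3 * gib, children=[]) for i in range(10)]
    for group in agglomerate(tasks, node_memory_limit=8 * gib):
        print([t.name for t in group])
```

A locality-aware scheduler would then assign each agglomerated group to the node already holding most of its inputs; the sketch above deliberately omits that step.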
