Abstract

A combination of distributed multi-tenant infrastructures, such as public Clouds and on-premises installations belonging to different organisations, is frequently used for scientific research because of the high computational requirements involved. Although resource sharing maximises infrastructure usage, it typically causes undesirable effects such as the noisy neighbour, producing unpredictable variations in the computing capabilities of the infrastructure. These fluctuations affect execution efficiency, even for loosely coupled applications such as many Monte Carlo based simulation programs. This highlights the need for a service capable of handling workload distribution across multiple infrastructures to mitigate these unpredictable performance fluctuations. With this aim, this work introduces TaScaaS, a highly scalable and completely serverless service deployed on AWS to distribute loosely coupled jobs among several computing infrastructures and load balance them using a completely asynchronous approach, coping with the performance fluctuations with minimal impact on execution time. We demonstrate that TaScaaS is not only capable of handling these fluctuations efficiently, achieving reductions in execution time of up to 45% in our experiments, but also of splitting the jobs to be computed so as to meet a user-defined execution time.
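The paper's actual scheduling logic is not reproduced here, but the core idea of splitting a loosely coupled workload so that every share finishes near a user-defined execution time can be sketched as a proportional split over per-infrastructure throughput estimates. The function name, parameters, and the simple proportional policy below are illustrative assumptions, not the TaScaaS implementation:

```python
def plan_split(total_histories, throughputs, target_seconds):
    """Illustrative sketch (not the TaScaaS algorithm): split a Monte
    Carlo workload among infrastructures in proportion to their
    measured throughput (histories per second), so that all shares
    finish at roughly the same time, and check whether the ideal
    parallel wall time meets the user-defined target."""
    total_rate = sum(throughputs)
    # Proportional shares, truncated to whole histories.
    shares = [int(total_histories * t / total_rate) for t in throughputs]
    # Assign the rounding remainder to the first infrastructure.
    shares[0] += total_histories - sum(shares)
    # Ideal wall time if every infrastructure sustains its rate.
    est_time = total_histories / total_rate
    return shares, est_time <= target_seconds
```

In an asynchronous load balancer, such a split would be recomputed as throughput estimates drift, reassigning the remaining histories rather than the original total.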

Highlights

  • Huge computational power is commonly required in science and engineering to perform computational experiments

  • In Monte Carlo simulations of radiation transport applied to the calculation of ionisation chamber correction factors, the work presented by Christian et al [1] required more than 30000 CPU hours to simulate a single case consisting of more than 7 · 10¹¹ primary particles, and Vicent et al [2] reported approximately 13800 CPU hours to simulate each combination of ionisation chamber and photon beam considered in the study, resulting in a total of 745200 CPU hours

  • Due to the importance of radiation transport simulations in clinical applications and the long execution times involved, we have executed Monte Carlo simulations using PenRed [29], a framework for radiation transport simulations based on Monte Carlo techniques

Introduction

Huge computational power is commonly required in science and engineering to perform computational experiments. In Monte Carlo simulations of radiation transport applied to the calculation of ionisation chamber correction factors, the work presented by Christian et al [1] required more than 30000 CPU hours to simulate a single case consisting of more than 7 · 10¹¹ primary particles, and Vicent et al [2] reported approximately 13800 CPU hours to simulate each combination of ionisation chamber and photon beam considered in the study, resulting in a total of 745200 CPU hours.
