Abstract

Distributed computing infrastructures are commonly used for scientific computing, and science gateways provide complete middleware stacks to allow their transparent exploitation by end users. However, administering such systems manually is time-consuming and sub-optimal because of the complexity of the execution conditions. Algorithms and frameworks aiming at automating system administration must deal with online and non-clairvoyant conditions, where most parameters are unknown and evolve over time. We consider the problem of controlling task granularity and fairness among scientific workflows executed in these conditions. We present two self-managing loops that monitor the fineness, coarseness, and fairness of workflow executions, compare these metrics with thresholds extracted from knowledge acquired in previous executions, and plan appropriate actions to keep these metrics within appropriate ranges. Experiments on the European Grid Infrastructure show that our task granularity control can speed up executions by up to a factor of 2 and that our fairness control reduces slowdown variability by a factor of 3 to 7 compared with first-come, first-served. We also study the interaction between granularity control and fairness control: our experiments demonstrate that controlling task granularity degrades fairness but that our fairness control algorithm can compensate for this degradation. Copyright © 2014 John Wiley & Sons, Ltd.
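To make the self-managing loop concrete, below is a minimal, self-contained sketch of one monitor/analyze/plan iteration in the spirit described above. It is not the paper's implementation: the Workflow and Thresholds fields, the fineness and unfairness proxies, the threshold values, and the planned actions are all hypothetical stand-ins, whereas the paper derives its thresholds from knowledge acquired in previous executions.

```python
# Hypothetical sketch of one self-managing control iteration.
# All metrics, thresholds, and actions are illustrative stand-ins,
# not the algorithms evaluated in the paper.

from dataclasses import dataclass
from statistics import median

@dataclass
class Workflow:
    name: str
    task_runtimes: list[float]   # runtimes of completed tasks, in seconds
    slowdown: float = 1.0        # measured slowdown vs. running alone
    priority: int = 0

@dataclass
class Thresholds:
    # Illustrative values; the paper learns these from past executions.
    max_fineness: float = 0.55   # too many short tasks -> overhead dominates
    max_coarseness: float = 0.40 # too few short tasks -> lost parallelism
    max_unfairness: float = 0.20 # acceptable spread of per-workflow slowdowns

def fineness(wf: Workflow, short_cutoff: float = 60.0) -> float:
    """Fraction of tasks shorter than a cutoff (a crude fineness proxy)."""
    return sum(r < short_cutoff for r in wf.task_runtimes) / len(wf.task_runtimes)

def unfairness(wfs: list[Workflow]) -> float:
    """Largest deviation of a slowdown from the median (a crude fairness proxy)."""
    m = median(wf.slowdown for wf in wfs)
    return max(abs(wf.slowdown - m) for wf in wfs)

def control_iteration(wfs: list[Workflow], t: Thresholds) -> None:
    """One monitor/analyze/plan pass; 'actions' here only log decisions."""
    for wf in wfs:
        f = fineness(wf)
        if f > t.max_fineness:
            print(f"{wf.name}: fineness {f:.2f} too high -> group short tasks")
        elif 1.0 - f > t.max_coarseness:
            print(f"{wf.name}: too coarse -> ungroup tasks to restore parallelism")
    if unfairness(wfs) > t.max_unfairness:
        worst = max(wfs, key=lambda wf: wf.slowdown)
        worst.priority += 1  # favor the most slowed-down workflow
        print(f"unfair execution: boosting priority of {worst.name}")

if __name__ == "__main__":
    wfs = [Workflow("wf-A", [12.0, 8.0, 300.0, 15.0], slowdown=4.0),
           Workflow("wf-B", [500.0, 620.0], slowdown=1.2)]
    control_iteration(wfs, Thresholds())
```

Running the sketch groups the many short tasks of wf-A, ungroups the coarse wf-B, and then boosts the priority of wf-A, whose slowdown is furthest from the median; this illustrates how granularity actions and fairness actions can interact, as the experiments in the paper examine.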
