Abstract

Modern cloud-scale data centers are adopting workload co-location as an effective mechanism for improving resource utilization. However, workload co-location stresses resource availability in unconventional and unpredictable ways. Efficient resource management requires continuous, and ideally predictive, runtime knowledge of system metrics that are sensitive both to workload demands (e.g., CPU, memory) and to interference effects induced by co-location. In this paper, we present Rusty, a framework that addresses the aforementioned challenges by leveraging the power of Long Short-Term Memory (LSTM) networks to forecast, at runtime, performance metrics of applications executed on systems under interference. We evaluate Rusty under a diverse set of interference scenarios for a wide range of cloud workloads, showing that Rusty achieves very high prediction accuracy, up to 0.99 in terms of $R^2$, while satisfying the strict latency constraints required for runtime use.
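The forecasting core described above can be illustrated with a minimal sketch of an LSTM cell rolled over a 1-D metric trace. This is not Rusty's implementation; the weight shapes, hidden size, and the synthetic sine-wave trace standing in for a performance metric (e.g., IPC or latency samples) are all illustrative assumptions, shown only to make the gate arithmetic of the forecasting step concrete.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) biases; gate order here is [input, forget, output,
    candidate] (an arbitrary convention for this sketch)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell update
    c_new = f * c + i * g                   # new cell state
    h_new = o * np.tanh(c_new)              # new hidden state
    return h_new, c_new

# Hypothetical toy setup: one-step-ahead forecast of a scalar metric.
rng = np.random.default_rng(0)
D, H = 1, 8                                 # input dim, hidden size (assumed)
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
w_out = rng.standard_normal(H) * 0.1        # linear readout to a scalar

# Stand-in for a sampled runtime performance-metric trace.
series = np.sin(np.linspace(0, 4 * np.pi, 50))
h, c = np.zeros(H), np.zeros(H)
for t in range(len(series)):
    h, c = lstm_step(np.array([series[t]]), h, c, W, U, b)
forecast = float(w_out @ h)                 # predicted next metric value
print(forecast)
```

In a real deployment these weights would be learned offline from traced executions, and the trained network would be queried online at each sampling interval, which is why the paper's latency constraint matters.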
