Abstract

It is challenging to reduce resource over-provisioning for enterprise applications while maintaining service level objectives (SLOs) due to their time-varying and stochastic workloads. In this paper, we study the effect of prediction on dynamic resource allocation to virtualized servers running enterprise applications. We present predictive controllers using three different prediction algorithms, based on a standard auto-regressive (AR) model, a combined ANOVA-AR model, and a multi-pulse (MP) model. We compare the properties of these predictive controllers with an adaptive integral (I) controller designed in our earlier work on controlling the relative utilization of resource containers. The controllers are evaluated in a hypothetical virtual server environment using CPU utilization traces collected from 36 servers in an enterprise data center. Since these traces were collected in an open-loop environment, we use a simple queuing algorithm to simulate the closed-loop CPU usage under dynamic control of CPU allocation. We also study the controllers by replaying the utilization traces on a test bed where a Web server was hosted inside a Xen virtual machine. Comparing the results across all servers, we find that the MP-based predictive controller performed statistically slightly better than the other two predictive controllers. The ANOVA-AR-based approach is highly sensitive to the existence of periodic patterns in the trace, while the other three methods are not. In addition, all three predictive schemes performed significantly better when the prediction error was accounted for using a feedback mechanism. The MP-based method also demonstrated an interesting self-learning behavior.
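To illustrate the prediction-plus-feedback idea summarized above, the following minimal Python sketch forecasts the next CPU demand with a standard AR(1) model and corrects the allocation using the previous step's prediction error. The AR coefficient `alpha`, the headroom factor, and the gain `k` are illustrative assumptions, not values from the paper, and the sketch is not the paper's actual controllers.

```python
import numpy as np

def ar1_predict(history, alpha=0.9):
    """One-step-ahead forecast of CPU demand from an AR(1) model.
    `alpha` is a hypothetical, pre-fitted AR coefficient."""
    return alpha * history[-1]

def allocate_cpu(history, prev_error=0.0, headroom=1.2, k=0.5, cap=1.0):
    """Compute the next CPU share for a virtual machine.

    The allocation is the predicted demand scaled by a headroom factor
    (to protect the SLO), plus a correction proportional to the previous
    prediction error -- a feedback term of the kind the abstract reports
    as improving all three predictive schemes. `headroom` and `k` are
    illustrative, not taken from the paper.
    """
    prediction = ar1_predict(history)
    allocation = headroom * prediction + k * prev_error
    return float(np.clip(allocation, 0.0, cap))

# Example: trace values are CPU utilization as a fraction of one core.
trace = [0.32, 0.35, 0.41, 0.38, 0.45]
error = trace[-1] - ar1_predict(trace[:-1])   # last step's prediction error
print(allocate_cpu(trace, prev_error=error))  # next CPU allocation
```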
