Abstract

Optimal load distribution and speed scaling in a heterogeneous data center, accounting for the tradeoff between maintenance and operation costs and system performance, are crucial issues in cloud computing. Because the state of a cloud center changes over time and task arrival rates vary, a static control model is infeasible for cloud computing. In this article, we propose a novel multi-server control model with dynamic feedback, which acquires the dynamic state of the cloud system, and with awareness of queue waiting cost, which optimizes queue waiting time and load distribution during task assignment and server configuration management. Using speed scaling, each server in a data center is configured as an $M/M/1$ queueing system with a variable service rate, where the service rate is a function of the task queue length. We formulate two optimization problems, the optimal load distribution problem and the optimal service rate control problem, and provide algorithms to solve them, facilitating load distribution and service rate adjustment. We also present numerical simulations to validate our model. The results show that our model efficiently performs dynamic multi-server configuration and task assignment based on feedback information, balancing system cost against performance.
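
To make the speed-scaling idea concrete, the following is a minimal sketch of a single server modeled as an $M/M/1$-style queue whose service rate depends on the current queue length, as the abstract describes. The rate function `service_rate` and all parameter values (`LAMBDA`, `MU_BASE`, `ALPHA`) are illustrative assumptions, not the paper's actual formulation or algorithms.

```python
import random

# Illustrative parameters (assumed, not from the paper).
LAMBDA = 4.0    # task arrival rate (tasks per unit time)
MU_BASE = 1.0   # base service rate of the server
ALPHA = 0.6     # hypothetical speed-scaling exponent


def service_rate(queue_length: int) -> float:
    """Service rate as a function of queue length (assumed functional form)."""
    return MU_BASE * max(queue_length, 1) ** ALPHA


def simulate(horizon: float = 10_000.0, seed: int = 1) -> float:
    """Event-driven simulation of the queue; returns the time-averaged queue length.

    Because arrivals and services are exponential (memoryless), the next event
    can be drawn as the minimum of two competing exponential clocks.
    """
    rng = random.Random(seed)
    t, queue, area = 0.0, 0, 0.0
    while t < horizon:
        arrival_gap = rng.expovariate(LAMBDA)
        if queue > 0:
            service_gap = rng.expovariate(service_rate(queue))
        else:
            service_gap = float("inf")  # no departure possible from an empty queue
        step = min(arrival_gap, service_gap)
        area += queue * step
        t += step
        queue += 1 if arrival_gap <= service_gap else -1
    return area / t


if __name__ == "__main__":
    print(f"average queue length ~ {simulate():.2f}")
```

Raising `ALPHA` makes the server speed up more aggressively as the backlog grows, which shortens waiting times at the expense of higher operating cost; this is the cost-performance tradeoff the model navigates.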
