Abstract

This paper proposes a novel lightweight simulation system for dynamic resource scheduling in cloud data centers, reviews two existing application-level simulation systems for cloud computing, and examines and discusses the results obtained with the proposed system. Resource utilization and energy efficiency in cloud data centers can be improved through load balancing and the consolidation of virtual machines. One aspect of dynamic virtual machine consolidation that directly affects resource utilization and the quality of service delivered by the system is deciding when it is best to reallocate virtual machines from an overloaded host [1]. Server overloads affect quality of service because they cause resource shortages and degrade application performance. Existing approaches to host overload detection typically rely on statistical analysis or nature-inspired heuristics to find the best solution. The drawbacks of these strategies are that they yield suboptimal results and do not allow an explicit quality-of-service goal to be specified. We present a novel host overload detection method that, for any known stationary workload and a given state configuration, optimally maximizes the mean inter-migration time under a specified quality-of-service goal [2]. Through simulations with real-world workload traces from more than a thousand virtual machines, we show that our technique outperforms the best benchmark algorithm and delivers more than 88% of the performance of the optimal offline algorithm.
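
The abstract only outlines the approach, but to make the quantities concrete, the Python snippet below is a minimal illustrative sketch. It assumes a simple threshold-based notion of host overload and a quality-of-service goal expressed as a cap on the fraction of time a host spends overloaded; the names (HostMonitor, CPU_OVERLOAD_THRESHOLD, OTF_LIMIT) and the threshold logic are assumptions for illustration, not the paper's optimal detection algorithm.

    from dataclasses import dataclass, field
    from typing import List

    # Assumed parameters for illustration only; the paper derives an optimal policy instead.
    CPU_OVERLOAD_THRESHOLD = 0.9   # host treated as overloaded above 90% CPU utilization
    OTF_LIMIT = 0.1                # assumed QoS goal: at most 10% of observed time overloaded

    @dataclass
    class HostMonitor:
        """Tracks per-interval CPU utilization and flags when migration should be triggered."""
        utilization_history: List[float] = field(default_factory=list)

        def record(self, cpu_utilization: float) -> None:
            self.utilization_history.append(cpu_utilization)

        def overload_time_fraction(self) -> float:
            # Fraction of observed intervals in which the host exceeded the overload threshold.
            if not self.utilization_history:
                return 0.0
            overloaded = sum(u > CPU_OVERLOAD_THRESHOLD for u in self.utilization_history)
            return overloaded / len(self.utilization_history)

        def should_migrate(self) -> bool:
            # Trigger VM reallocation once the QoS limit on overload time is at risk.
            return self.overload_time_fraction() > OTF_LIMIT

    # Example: feed in a short utilization trace and check the migration decision.
    monitor = HostMonitor()
    for u in [0.5, 0.95, 0.97, 0.6, 0.92, 0.99]:
        monitor.record(u)
    print(f"OTF = {monitor.overload_time_fraction():.2f}, migrate: {monitor.should_migrate()}")

In this simplified view, a longer mean inter-migration time corresponds to triggering should_migrate() less often while still keeping the overload time fraction within the stated limit.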
