Abstract

In data center (DC) environments, machine learning (ML) algorithms play an important role in resource management, increasing efficiency by predictively monitoring workload trends and adjusting jobs accordingly. In this paper, we propose a system to predict the CPU usage of the virtual machines (VMs) of a DC. Our proposal clusters VMs based on their historical information (i.e., time series): several traditional ML algorithms are evaluated on common statistical features of the VM time series, which makes it possible to group VMs with similar behavior into clusters. Representative models are then trained per cluster, and the one with the lowest mean error is selected. Simulation results show that, by clustering and training the model with representative time series, it is indeed possible to obtain a low mean error while reducing the local training time per individual VM.
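The pipeline summarized above (extract statistical features from each VM's CPU time series, cluster VMs by those features, then pick a representative per cluster on which a model would be trained) can be sketched as follows. This is a minimal illustration in pure Python, not the paper's implementation: the feature set (mean, spread, trend), the plain k-means variant with farthest-point initialization, and the toy data are all assumptions made for the example.

```python
import statistics

def features(series):
    # Hypothetical feature set for illustration: mean, spread, and trend
    # (least-squares slope) of one VM's CPU-usage time series.
    n = len(series)
    mean = statistics.fmean(series)
    spread = statistics.pstdev(series)
    xbar = (n - 1) / 2
    slope = sum((x - xbar) * (y - mean) for x, y in enumerate(series)) \
        / sum((x - xbar) ** 2 for x in range(n))
    return (mean, spread, slope)

def dist2(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    # Plain k-means; farthest-point initialization keeps the sketch
    # deterministic for this example.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points,
                             key=lambda p: min(dist2(p, c) for c in centroids)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return labels, centroids

def representatives(points, labels, centroids):
    # One representative VM per cluster: the member closest to its centroid.
    # In the paper's pipeline, models would be trained on these only,
    # reducing the local training time per individual VM.
    reps = {}
    for c, centroid in enumerate(centroids):
        members = [i for i, lab in enumerate(labels) if lab == c]
        if members:
            reps[c] = min(members, key=lambda i: dist2(points[i], centroid))
    return reps

# Toy data: three flat low-usage VMs and three VMs with rising CPU usage.
flat = [[10.0 + (t % 3) for t in range(24)] for _ in range(3)]
rising = [[50.0 + 2.0 * t + (t % 2) for t in range(24)] for _ in range(3)]
points = [features(s) for s in flat + rising]
labels, centroids = kmeans(points, k=2)
reps = representatives(points, labels, centroids)
```

With these well-separated toy series, the flat VMs and the rising VMs land in different clusters, and one index per cluster is returned as the representative on which training would proceed.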
