Abstract

In this paper we present the benefits of applying deep-learning time-series analysis to reduce computing resource usage, with the goal of making data centers more sustainable. Modern enterprises and agile ways of working have revolutionized how software engineers develop and deploy software, driving the proliferation of container-based technologies such as Kubernetes and Docker. Modern systems tend to consume substantial resources even when idle, and intelligent scaling is one method of preventing this waste. We present several methods of predicting computing resource usage from historical data of real production distributed software systems at the European Organization for Nuclear Research (CERN), enabling the number of machines running a given service to be scaled down during periods identified as idle. The method leverages recurrent neural network architectures to accurately predict the future CPU usage of a software system given its past activity.
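The pipeline the abstract describes — framing historical CPU measurements into look-back windows, forecasting the next value, and flagging predicted-idle periods for down-scaling — can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: it fits a simple least-squares autoregressor where the paper uses a recurrent network, and the synthetic data, window length, and idle threshold are all assumed values.

```python
import numpy as np

def make_windows(series, window):
    """Frame a 1-D CPU-usage series into (look-back window -> next value) pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def fit_autoregressor(X, y):
    """Least-squares linear predictor (stand-in for the paper's recurrent model)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, window_vals):
    """One-step-ahead forecast from the most recent window."""
    return np.append(window_vals, 1.0) @ w

# Synthetic periodic CPU-usage trace in percent (assumed data: busy peaks,
# idle troughs, plus noise) standing in for real production telemetry.
t = np.arange(500)
rng = np.random.default_rng(0)
cpu = np.clip(50 + 40 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 2, t.size), 0, 100)

WINDOW = 20          # look-back length in samples (assumed)
IDLE_THRESHOLD = 25  # % CPU below which the service would be scaled down (assumed)

X, y = make_windows(cpu[:400], WINDOW)
w = fit_autoregressor(X, y)

# Forecast the next sample from the latest window and make a scaling decision.
forecast = predict(w, cpu[380:400])
scale_down = bool(forecast < IDLE_THRESHOLD)
```

In a production setting the forecaster would look further ahead than one step and the scaling decision would feed an orchestrator (e.g. a Kubernetes autoscaler) rather than a boolean flag, but the window-then-predict-then-threshold structure is the same.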
