Abstract

Cloud computing allows multiple applications to share physical resources, enabling rapid provisioning and improving hardware utilization. However, applications contending for shared resources are susceptible to interference, which can lead to significant performance degradation and, consequently, an increase in Service Level Agreement violations. In previous work, we analyzed resource contention and its impact on performance degradation and hardware utilization. We then built an interference-aware application classifier based on machine learning techniques and evaluated it under two classification strategies: (i) unique, in which a single classification is performed over an application's entire execution; and (ii) segmented, in which classification is carried out over multiple statically defined intervals. Moving towards a dynamic scheduling solution, we combine and extend these findings and present IADA, a full-fledged dynamic interference-aware cloud scheduling architecture for latency-sensitive workloads. Our approach replaces the segmented interference classification of applications with a dynamic classification scheme driven by workload variations. To use the available resources more efficiently while respecting Quality of Service requirements, the proposed architecture combines machine learning techniques, heuristics, and a Bayesian changepoint detection algorithm for online inference. We conducted real and simulated experiments, using an extension we developed for the CloudSim Toolkit, to analyze the proposed architecture's efficiency and compare it with related studies. Results show that IADA reduces overall performance degradation by 25%, on average.
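
The abstract describes the online-inference component only at a high level. As an illustration of the kind of mechanism involved, the listing below is a minimal sketch of Bayesian online changepoint detection (in the style of Adams and MacKay) applied to a workload metric stream; the Gaussian observation model, the function name bocd, the hazard rate, and all parameter values are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.stats import norm

    def bocd(stream, hazard=1 / 100.0, mu0=0.0, var0=1.0, var_x=1.0):
        """Bayesian online changepoint detection for a univariate Gaussian
        stream with known observation variance (illustrative sketch only).
        Returns the most probable run length at each step."""
        T = len(stream)
        R = np.zeros((T + 1, T + 1))   # run-length posterior, row = time step
        R[0, 0] = 1.0
        mu = np.array([mu0])           # per-run-length posterior mean of the metric
        var = np.array([var0])         # per-run-length posterior variance
        map_run_length = np.zeros(T, dtype=int)

        for t, x in enumerate(stream):
            # predictive probability of x under each candidate run length
            pred = norm.pdf(x, loc=mu, scale=np.sqrt(var + var_x))
            # growth: the current run continues (no changepoint)
            growth = R[t, : t + 1] * pred * (1 - hazard)
            # changepoint: mass from all run lengths collapses to r = 0
            cp = np.sum(R[t, : t + 1] * pred * hazard)
            R[t + 1, 0] = cp
            R[t + 1, 1 : t + 2] = growth
            R[t + 1] /= np.sum(R[t + 1])
            # conjugate Gaussian update of the per-run-length parameters
            prec_new = 1.0 / var + 1.0 / var_x
            mu_new = (mu / var + x / var_x) / prec_new
            var_new = 1.0 / prec_new
            mu = np.concatenate(([mu0], mu_new))
            var = np.concatenate(([var0], var_new))
            map_run_length[t] = np.argmax(R[t + 1, : t + 2])
        return map_run_length

    if __name__ == "__main__":
        # Hypothetical workload trace: a steady phase followed by a load shift.
        rng = np.random.default_rng(0)
        workload = np.concatenate([rng.normal(0.3, 0.05, 200),
                                   rng.normal(0.8, 0.05, 200)])
        run_lengths = bocd(workload, hazard=1 / 250.0,
                           mu0=0.5, var0=0.1, var_x=0.05 ** 2)
        # A drop in the MAP run length near index 200 signals a workload change,
        # the kind of event that would trigger reclassification in an
        # interference-aware scheduler such as IADA.

In a scheduler of this kind, such a detector would watch per-application workload metrics and, whenever a changepoint is flagged, prompt the classifier to reassess the application's interference profile instead of relying on fixed classification intervals.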
