Abstract

The increase in computing capacity has caused a rapid and sudden rise in the Operational Expenses (OPEX) of data centers, and OPEX reduction is a major concern and a key target in modern data centers. In this study, the scalability of the Dynamic Voltage and Frequency Scaling (DVFS) power management technique is examined under multiple workloads in a 3-tier data center. We conducted several experiments to measure the impact of DVFS on energy reduction under two scheduling techniques, namely Round Robin and Green. We observed that the amount of energy reduction varies with data center load: as the load increases, the energy reduction decreases. Experiments using the Green scheduler showed around an 83% decrease in power consumption when DVFS is enabled and the data center is lightly loaded. When the data center is fully loaded, so that the servers' CPUs are constantly busy with no idle time, the effect of DVFS diminishes and stabilizes at less than 10%. Experiments using the Round Robin scheduler showed less energy saving from DVFS: around 25% under light load and less than 5% under heavy load. To determine the effect of task weight on energy consumption, a further set of experiments applied thin and fat tasks, where a thin task contains far fewer instructions than a fat one. The simulations showed that the difference in power reduction between the two task types under DVFS is less than 1%.
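The load dependence of the savings follows from how DVFS scales power. As a minimal sketch (not the paper's simulator), assuming the common model in which dynamic power grows roughly cubically with frequency (P ∝ V²f, with voltage roughly proportional to frequency), the Python snippet below shows why the headroom for frequency scaling, and hence the saving, shrinks as utilization approaches 100%. The wattage figures and function names are illustrative assumptions, not values from the study.

```python
# Minimal sketch of DVFS server power, assuming dynamic power ~ f**3
# (P ~ V^2 * f with voltage roughly proportional to frequency).
# The idle/peak wattages are illustrative assumptions, not measured values.

P_IDLE = 100.0   # watts with the CPU idle (assumed)
P_PEAK = 250.0   # watts at full frequency and full load (assumed)

def server_power(load: float, dvfs: bool) -> float:
    """Power draw of one server at CPU utilization `load` in [0, 1]."""
    if dvfs:
        # DVFS tracks utilization with frequency, so the dynamic part
        # of the power drops roughly with load**3.
        return P_IDLE + (P_PEAK - P_IDLE) * load ** 3
    # Without DVFS the CPU runs at full frequency; the dynamic part
    # grows only linearly with utilization.
    return P_IDLE + (P_PEAK - P_IDLE) * load

for load in (0.1, 0.5, 1.0):
    saving = 1.0 - server_power(load, True) / server_power(load, False)
    print(f"load {load:.0%}: DVFS saves {saving:.1%}")
```

In this toy model the two curves coincide at full load, consistent with the observation above that DVFS gains shrink once CPUs have no idle time; the Green scheduler amplifies the light-load gains by consolidating work so that more servers sit at low utilization.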

Highlights

  • Cloud computing is an Internet-based computing model that provides shared processing resources and data to computers and other devices on demand

  • Dynamic Voltage and Frequency Scaling (DVFS) was applied and results were acquired for all three architectures, showing that power management schemes are applicable to both computational and communication components

  • We study the scalability of DVFS and highlight how it performs with different schedulers

Introduction

Cloud computing is an Internet-based computing model that provides shared processing resources and data to computers and other devices on demand. Ever since the cost of powering and cooling data centers increased, improving energy efficiency has been a major concern in cloud computing. 3-tier data center architectures comprise three layers: access, aggregation, and core. A typical 3-tier model contains 8 switches in the core network and uses eight-way Equal-Cost Multi-Path (ECMP) routing with 10 GE Link Aggregation Groups (LAGs) [8]. High-speed 3-tier architectures have been proposed to make better use of computational nodes, because the capacities of the communication nodes (the core and aggregation networks) are a bottleneck that limits the number of nodes in the data center. Both of these architectures are new in the research area; they have not been tested in real data centers, and their advantages in large data centers are not yet clear [1].
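As a back-of-the-envelope illustration of why communication capacity bounds the number of nodes, the sketch below estimates how many hosts a core layer can support. Only the 8 core switches and the 10 GE LAGs come from the description above; the per-core LAG count, server link rate, and oversubscription ratios are hypothetical assumptions.

```python
# Back-of-the-envelope sketch: the network core, not the servers, bounds
# how many hosts fit in a 3-tier data center. Figures are hypothetical
# except the 8 core switches and the 10 GE LAG rate noted in the text.

CORE_SWITCHES = 8        # from the architecture description above
LAGS_PER_CORE = 8        # assumed: eight-way ECMP paths per core switch
LAG_RATE_GBPS = 10.0     # 10 GE LAGs [8]
SERVER_LINK_GBPS = 1.0   # assumed: 1 GE server access links

def max_hosts(oversubscription: float) -> int:
    """Hosts supportable at a given edge-to-core oversubscription ratio."""
    core_capacity = CORE_SWITCHES * LAGS_PER_CORE * LAG_RATE_GBPS  # Gb/s
    return int(core_capacity * oversubscription / SERVER_LINK_GBPS)

for ratio in (1.0, 4.0):
    print(f"{ratio:.0f}:1 oversubscription -> up to {max_hosts(ratio)} hosts")
```

Under these assumptions, adding servers beyond the printed bounds only increases congestion at the core, which is the bottleneck effect the architecture proposals try to relieve.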

Research Scope
Literature Review
Scalability of the DVFS Power Management Technique
Experimental Setup and Environment
Data Center Schedulers
Experimental Results
Conclusions