Abstract
The remarkable development of cloud computing in the past few years, and its proven ability to handle web-hosting workloads, is prompting researchers to investigate whether clouds are suitable for running large-scale computations. Cloud load balancing is one of the solutions for providing reliable and scalable cloud services. In particular, load balancing for multimedia streaming requires dynamic, real-time load-balancing strategies. In this context, this paper proposes an Inter Cloud Manager (ICM) job dispatching algorithm for large-scale cloud environments. ICM mainly performs two tasks: clustering (neighboring) and decision-making. For clustering, ICM uses Hello packets that observe and collect data from its neighbor nodes; decision-making is based on both the measured execution time and the network delay incurred in forwarding a job and receiving the result of its execution. We then run experiments on a large-scale laboratory test-bed to evaluate the performance of ICM and compare it with well-known decentralized algorithms such as Ant Colony, Workload and Client Aware Policy (WCAP), and the Honey-Bee Foraging Algorithm (HFA). Measurements focus in particular on the observed total average response time, including network delay, in congested environments. The experimental results show that, in most cases, ICM is better at avoiding system saturation under heavy load.
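The abstract describes ICM's two tasks only at a high level, so the following is a minimal, hypothetical Python sketch of that flow: neighbor metrics are collected via Hello exchanges (clustering), and a job is forwarded to the neighbor whose estimated response time (measured execution time plus network delay) is lowest (decision-making). The class and field names are illustrative and not taken from the paper.

```python
# Hypothetical sketch of ICM-style clustering and decision-making.
from dataclasses import dataclass


@dataclass
class NeighborState:
    node_id: str
    avg_execution_time: float  # seconds, as reported in the last Hello reply
    network_delay: float       # seconds, measured round-trip to the neighbor


class InterCloudManager:
    def __init__(self) -> None:
        self.neighbors: dict[str, NeighborState] = {}

    def on_hello_reply(self, state: NeighborState) -> None:
        """Clustering step: record or refresh the metrics a neighbor reported."""
        self.neighbors[state.node_id] = state

    def select_target(self) -> str | None:
        """Decision step: pick the neighbor with the lowest estimated
        response time (execution time + network delay)."""
        if not self.neighbors:
            return None
        best = min(self.neighbors.values(),
                   key=lambda s: s.avg_execution_time + s.network_delay)
        return best.node_id


# Example: two neighbors observed via Hello packets; the dispatcher picks
# the one whose combined execution time and delay is smaller.
icm = InterCloudManager()
icm.on_hello_reply(NeighborState("cloud-A", avg_execution_time=0.40, network_delay=0.05))
icm.on_hello_reply(NeighborState("cloud-B", avg_execution_time=0.30, network_delay=0.20))
print(icm.select_target())  # cloud-A (0.45 s < 0.50 s)
```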
Highlights
In the past two decades, cloud computing has emerged as an enabling technology and has been increasingly adopted in many areas, including business, science, and engineering, because of its inherent scalability, flexibility, and cost-effectiveness [1].
ENT is calculated using a formula that divides by (1 − loss rate).
The expected job transfer time (EJTT) is calculated from the average job execution time (see the sketch below).
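The two highlight items above only gesture at the underlying formulas, so the following is a hedged reconstruction rather than the paper's exact definitions: the 1/(1 − loss rate) term is the standard expected-retransmission correction, and the multiplier k stands in for a factor that is not specified here.

```latex
% Hedged reconstruction; the right-hand sides are assumptions, not the
% paper's exact definitions.
\begin{align}
  \mathrm{ENT}  &\approx \frac{d_{\mathrm{net}}}{1 - p_{\mathrm{loss}}}
    && \text{network term inflated by the loss rate } p_{\mathrm{loss}},\\
  \mathrm{EJTT} &\approx \bar{T}_{\mathrm{exec}} \times k
    && \text{average job execution time scaled by an unspecified factor } k.
\end{align}
```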
Summary
In the past two decades, cloud computing has emerged as an enabling technology and has been increasingly adopted in many areas, including business, science, and engineering, because of its inherent scalability, flexibility, and cost-effectiveness [1]. The data center classifies jobs according to the service-level agreement and the requested services, and each job is assigned to one of the available servers. The servers perform the requested job, and a response is transmitted back to the requester. Scalability is one of the major advantages brought by the cloud paradigm. Efficient use of computing resources to minimize execution time requires a job dispatching algorithm that can appropriately determine the assignment of jobs. Because of this variety of jobs and servers, task scheduling mechanisms designed for a single system may not be effective in a distributed environment [13,14,15]. Mobile devices [16] are becoming one of the major sources of workload for clouds, both to save energy on the mobile device itself and to avoid moving bulky data over wireless networks between mobile devices and data repositories.
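The summary describes the front-end flow (classify each job by its SLA and requested service, assign it to an available server, return the result), so a short hypothetical sketch of that flow is given below. The names (Job, SERVER_POOLS, dispatch) and the round-robin choice within a pool are illustrative assumptions; the paper does not prescribe this structure.

```python
# Hypothetical sketch of SLA-based job classification and assignment.
from dataclasses import dataclass
from collections import deque


@dataclass
class Job:
    job_id: int
    sla_class: str   # e.g. "gold", "silver", "best-effort"
    service: str     # e.g. "streaming", "batch"


# Available servers grouped by the SLA class they are provisioned for.
SERVER_POOLS = {
    "gold": deque(["srv-1", "srv-2"]),
    "silver": deque(["srv-3"]),
    "best-effort": deque(["srv-4", "srv-5"]),
}


def dispatch(job: Job) -> str:
    """Assign the job to the next available server in its SLA pool
    (simple round-robin within the pool)."""
    pool = SERVER_POOLS[job.sla_class]
    server = pool.popleft()
    pool.append(server)  # rotate so the next job in this class goes elsewhere
    return server


print(dispatch(Job(1, "gold", "streaming")))  # srv-1
print(dispatch(Job(2, "gold", "streaming")))  # srv-2
```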