Abstract

Cloud computing is advancing rapidly. With this advancement, it has become possible to develop and host large-scale distributed applications on the Internet more economically and more flexibly. However, the geographical distribution of user bases, the Internet infrastructure available in those regions, and the dynamic nature of usage patterns are critical factors that affect the performance of these applications. It is therefore necessary to find the right combination of datacenter configurations, service broker policies, and load balancing algorithms to optimize both the performance of the application and the cost to its owners. This paper studies the effect of service broker policies and load balancing algorithms on the performance of large-scale Internet applications under different datacenter configurations. To achieve this goal, we modeled the behavior of the popular Facebook application using the most recent worldwide user statistics. We then evaluated the performance of this application under different datacenter configurations using: 1) two service broker policies, namely, closest datacenter and optimum response time; and 2) three load balancing algorithms, namely, round robin, equally spread current execution, and throttled load balancer. The overall average response time of the application and the overall average time a datacenter spends processing a user request are measured and the results are discussed. This study would help service providers gain valuable insights into the coordination between datacenters, service broker policies, and load balancing algorithms when designing Cloud infrastructure services across geographically distributed areas. In addition, application designers would benefit from this study in identifying the optimal deployment configuration for their applications.
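For readers unfamiliar with the three load balancing algorithms compared in this study, the Python sketch below illustrates how each one might choose a virtual machine (VM) for an incoming request. This is an illustrative sketch only, not the simulator's implementation; the class names, the select_vm method, and the active_count structure are hypothetical and introduced here purely for explanation.

# Illustrative sketch (hypothetical names): simplified VM-selection logic
# for the three load balancing policies compared in this study.

class RoundRobin:
    """Cycle through the VMs in a fixed order, ignoring current load."""
    def __init__(self, vm_ids):
        self.vm_ids = list(vm_ids)
        self.next_index = 0

    def select_vm(self, active_count):
        vm = self.vm_ids[self.next_index]
        self.next_index = (self.next_index + 1) % len(self.vm_ids)
        return vm

class EquallySpreadCurrentExecution:
    """Send the request to the VM with the fewest requests currently running."""
    def select_vm(self, active_count):
        # active_count maps vm_id -> number of requests currently executing
        return min(active_count, key=active_count.get)

class Throttled:
    """Allow at most `limit` concurrent requests per VM; otherwise the request waits."""
    def __init__(self, limit=1):
        self.limit = limit

    def select_vm(self, active_count):
        for vm, count in active_count.items():
            if count < self.limit:
                return vm
        return None  # no VM available; the request is queued until one frees up

In broad terms, round robin balances request counts but not actual load, equally spread current execution tracks the live number of running tasks per VM, and throttled caps concurrency per VM and queues any overflow, which is why the three policies can yield different response times under the same datacenter configuration.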

Highlights

  • Cloud computing (CC) has become a prevalent technology in recent years

  • We examined the performance of a modeled Facebook application under different datacenter configurations using: 1) two service broker policies, namely, closest datacenter and optimum response time; and 2) three load balancing algorithms, namely, round robin, equally spread current execution, and throttled load balancer

  • The overall average response time of the application and the overall average time spent for processing a user request by a datacenter are recorded and the results are discussed


Introduction

Cloud computing (CC) has become a prevalent technology in recent years. It provides a flexible and straightforward approach for storing and retrieving information, facilitating the collection of large volumes of data and the dissemination of records to clients around the globe. Handling these vast data collections requires several strategies to enhance and streamline operations and to deliver acceptable levels of performance to clients. CC offers computational and storage services through a pay-per-use business model. It is arguably the key innovation that fully complements the web: cloud computing refers to computing on the Internet, as opposed to computing on a desktop [3].
