Abstract

Consistent hashing has played an indispensable role in cloud infrastructure, although its load-balancing performance is not necessarily ideal. Many methods have been proposed to improve load balancing, yet consistent hashing has long remained the most widely used because those methods trade load balancing off against consistency, memory usage, lookup performance, and/or fault tolerance. This article presents Structured Allocation-based Consistent Hashing (SACH), a cloud-optimized consistent hashing algorithm that overcomes these trade-offs by exploiting two characteristics of cloud environments: managed scaling and auto-healing. Since scaling events can be distinguished from failures, SACH updates its hashing functions with two different algorithms: a fast-update algorithm for unmanaged backend failures, which provides fault tolerance with quick response, and a slow-update algorithm for managed scaling. Hashing functions are initialized or slow-updated with the characteristics of the fast-update algorithm in mind, so that load balancing and the other properties are satisfied as long as auto-healing keeps the number of failed backends small. Experimental results show that SACH outperforms existing algorithms in each of these aspects. SACH can improve load balancing in cloud infrastructure components where the trade-offs have so far prevented the renewal of hashing functions.

Highlights

  • Consistent hashing (CH) [11], [23] has become an essential building block in many cloud infrastructure components such as network load balancers and distributed databases [6], [7]

  • We investigated the effect of the setting of the Structured Allocation-based Consistent Hashing (SACH)-specific parameter df

  • When measuring load balancing with failed backends, we show the average value of 100 trials where failed backends were selected at random

Introduction

Consistent hashing (CH) [11], [23] has become an essential building block in many cloud infrastructure components, such as network load balancers and distributed databases [6], [7]. Even if the set of backends changes, CH avoids unnecessary remapping of keys: a key once mapped to a backend is, in most cases, consistently assigned to the same backend. This consistency is a crucial property that enables network load balancers to maintain TCP connections and prevents distributed database systems from being disrupted by massive data replacement. In P2P-like environments, the joining and leaving of backends are entirely uncontrollable; in the cloud, by contrast, scaling is managed. It is therefore valuable to reconsider cloud-optimized hashing with consistency, which achieves more even load balancing at the expense of some of the robustness of CH.
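To make the consistency property concrete, the following is a minimal sketch of a classic ring-based consistent hash with virtual nodes. This illustrates generic CH only, not SACH's structured-allocation algorithm; the class and parameter names (`HashRing`, `vnodes`) are illustrative choices, not from the paper.

```python
import bisect
import hashlib

class HashRing:
    """Minimal ring-based consistent hashing (generic CH, not SACH).

    Each backend is hashed to several points ("virtual nodes") on a ring;
    a key is assigned to the first backend point clockwise from its hash.
    Removing a backend only remaps the keys that it owned.
    """

    def __init__(self, backends, vnodes=100):
        # Build a sorted list of (ring position, backend) pairs.
        self._ring = sorted(
            (self._hash(f"{b}#{i}"), b)
            for b in backends
            for i in range(vnodes)
        )
        self._points = [p for p, _ in self._ring]

    @staticmethod
    def _hash(s):
        # 64-bit position on the ring derived from SHA-256.
        return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

    def lookup(self, key):
        # First backend point clockwise from the key's hash (wrapping around).
        i = bisect.bisect(self._points, self._hash(key)) % len(self._points)
        return self._ring[i][1]
```

For example, if a ring over backends `b0`, `b1`, `b2` loses `b1`, every key previously mapped to `b0` or `b2` keeps its assignment; only `b1`'s keys move. This is exactly the "a key mapped once to a backend is consistently assigned to the same backend" behavior that the trade-off methods in the abstract struggle to preserve.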
