Abstract

Today’s cloud-based online services are underpinned by distributed key-value stores (KVSs), in which keys and values are spread across back-end servers in scale-out fashion. A primary real-life performance bottleneck arises when storage servers suffer from load imbalance under skewed workloads. In this paper, we present KVSwitch, a centralized self-managing load balancer that leverages the power and flexibility of emerging programmable switches. Balance is achieved by dynamically predicting hot items and by creating replication strategies according to the KVS load. To overcome the challenges of realizing KVSwitch given the limitations of the switch hardware, we decompose KVSwitch’s functions and carefully design them for the heterogeneous processors inside the switch. We prototype KVSwitch in a Tofino switch. Experimental results show that our solution can effectively keep the KVS servers balanced even under highly skewed workloads. Furthermore, KVSwitch replicates only 70% of hot items and consumes 9.88% of server memory, rather than simply replicating all hot items to each server.

Highlights

  • Today’s Internet services, such as search engines, e-commerce, and social networking, critically depend on high-performance key-value stores (KVSs)

  • The results demonstrate that KVSwitch is able to achieve satisfactory load balance with only 9.88% resource consumption on servers compared to copying hot items to all servers (Section 5)

  • We compare the balancing performance of KVSwitch against NetCache [8], which caches hot items in the application-specific integrated circuit (ASIC) pipeline of programmable switches to balance the KVS


Introduction

Today’s Internet services, such as search engines, e-commerce, and social networking, critically depend on high-performance key-value stores (KVSs). These KVSs must provide high throughput to process copious queries from millions of users while meeting online response-time requirements. KVS workloads typically exhibit highly skewed request patterns, and the set of hot items changes rapidly due to popular posts and trending events [5,6,7]. In the presence of this skew, the servers holding the hot items become saturated, causing performance degradation: throughput is reduced and response time increases.
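To make the effect of skew concrete, the following is a minimal, self-contained Python sketch (not from the paper; all names and parameters are illustrative) that hash-partitions keys across servers and counts per-server request load under a Zipf-like request distribution, the standard model for skewed KVS workloads:

```python
import bisect
import hashlib
import random

def simulate_loads(n_servers=8, n_keys=10_000, n_requests=50_000, s=1.2, seed=1):
    """Toy model of KVS skew: hash-partition keys across servers and
    count requests per server under a Zipf-skewed request pattern."""
    rng = random.Random(seed)
    # Cumulative weights for inverse-CDF sampling from a Zipf distribution
    # with rank-frequency exponent s.
    cum, total = [], 0.0
    for rank in range(1, n_keys + 1):
        total += 1.0 / (rank ** s)
        cum.append(total)
    loads = [0] * n_servers
    for _ in range(n_requests):
        key = bisect.bisect_left(cum, rng.random() * total)
        # Assign each key to a server by hashing, as a partitioned KVS would.
        server = int(hashlib.md5(str(key).encode()).hexdigest(), 16) % n_servers
        loads[server] += 1
    return loads

loads = simulate_loads()
print(sorted(loads))
```

Running this shows the server holding the hottest keys receiving several times its fair 1/n share of requests, which is exactly the saturation effect described above.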
