Abstract

This work is motivated by the problem of load distribution in large-scale cloud-based data processing systems. We consider a heterogeneous service system consisting of multiple large server pools. The pools differ in that their servers may have different processing speeds and/or different buffer sizes (which may be finite or infinite). We study an asymptotic regime in which the customer arrival rate and the pool sizes scale to infinity simultaneously, in proportion to a scaling parameter $n$. Arriving customers are assigned to servers by a "router", according to a {\em pull-based} algorithm called PULL. Under this algorithm, each server sends a "pull-message" to the router when it becomes idle; the router assigns an arriving customer to the server of a uniformly randomly chosen available pull-message, if any are present, or to a uniformly random server otherwise. Assuming sub-critical system load, we prove asymptotic optimality of PULL: as the system scale $n\to\infty$, the steady-state probability of an arriving customer experiencing blocking or waiting vanishes. We also describe some generalizations of the model and the PULL algorithm for which the asymptotic optimality still holds.
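To make the routing rule concrete, the following is a minimal Python sketch of the PULL assignment step, assuming the router tracks pull-messages as a set of idle-server ids; the names (`PullRouter`, `notify_idle`, `route`) and the surrounding structure are illustrative assumptions, not part of the paper's formal model.

```python
import random


class PullRouter:
    """Illustrative sketch of the PULL routing rule (not the paper's formal model)."""

    def __init__(self, num_servers):
        self.num_servers = num_servers
        self.pull_messages = set()  # ids of servers that have reported being idle

    def notify_idle(self, server_id):
        """A server sends a pull-message to the router when it becomes idle."""
        self.pull_messages.add(server_id)

    def route(self):
        """Assign an arriving customer to a server.

        If any pull-messages are available, pick one uniformly at random and
        consume it; otherwise pick a server uniformly at random.
        """
        if self.pull_messages:
            server_id = random.choice(tuple(self.pull_messages))
            self.pull_messages.discard(server_id)
            return server_id
        return random.randrange(self.num_servers)


# Example usage: two servers report idle; the next arrival goes to one of them.
router = PullRouter(num_servers=5)
router.notify_idle(2)
router.notify_idle(4)
print(router.route())  # prints 2 or 4, chosen uniformly at random
```

The sketch only captures the assignment decision; the heterogeneous service rates, buffer sizes, and the asymptotic analysis are outside its scope.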
