Abstract

We consider a Markovian model for a distributed firm real-time system, with a homogeneous job arrival stream and multiple heterogeneous clusters of servers, each having its own queue and server pool. Upon a job's arrival, it is decided whether to reject it, at a cost of R, or to accept it and route it to some cluster, where it awaits processing in first-come first-served order. Jobs come with firm deadlines, on either the start or the end of service, and renege at a cost of 1 if the deadline is missed. Given the intractability of finding an average-cost optimal admission control and routing policy, we consider a static policy (optimal Bernoulli splitting (BS)) and four dynamic policies based on numeric indices attached to individual queues as functions of their current congestion: individually optimal (IO), policy improvement (PI) upon the optimal BS, restless bandit (RB), and a novel hybrid PI-RB policy. Index-computing algorithms of linear complexity are presented. A numerical study on two-cluster instances is reported, in which the policies are benchmarked against the optimal cost performance as model parameters are varied one at a time. The study reveals that the PI-RB index policy is consistently near optimal.

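To make the decision rule concrete, the following is a minimal sketch (in Python, not from the paper) of how an index policy of this general kind operates: upon each arrival, every cluster's index is evaluated at its current congestion level, the job is routed to the cluster with the smallest index, and it is rejected when even that index exceeds the rejection cost R. The index functions and the numeric values below are placeholders; the actual IO, PI, RB, and PI-RB index formulas and their linear-complexity computation are given in the paper.

```python
def admit_and_route(queue_lengths, index_fns, rejection_cost):
    """Decide whether to admit an arriving job and, if so, where to route it.

    queue_lengths  : current congestion n_k of each cluster's queue
    index_fns      : hypothetical index functions nu_k(n_k), one per cluster
    rejection_cost : the cost R incurred by rejecting the job

    Returns the chosen cluster, or None to reject the job.
    """
    # Evaluate each cluster's index at its current congestion level.
    indices = [nu(n) for nu, n in zip(index_fns, queue_lengths)]
    best_cluster = min(range(len(indices)), key=lambda k: indices[k])
    # Admit only if routing to the best cluster looks cheaper than rejecting.
    if indices[best_cluster] <= rejection_cost:
        return best_cluster
    return None  # reject at cost R


if __name__ == "__main__":
    # Two clusters with made-up increasing index functions of the queue length.
    index_fns = [lambda n: 0.1 * n * n,
                 lambda n: 0.05 * n * n + 0.2 * n]
    decision = admit_and_route(queue_lengths=[3, 5],
                               index_fns=index_fns,
                               rejection_cost=1.0)
    if decision is None:
        print("reject the job")
    else:
        print("route the job to cluster", decision)
```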