Abstract

We consider the problem of controlling a two-server Markovian queueing system with heterogeneous servers. The servers differ in their service rates and reliability attributes: the slower server is perfectly reliable, whereas the faster server is subject to random failures. The aim is to dynamically route customers at arrival, service completion, server failure, and server repair epochs so as to minimize the long-run average number of customers in the system. Using a Markov decision process model, we prove that, provided the system is stable, it is always optimal to route customers to the faster server when it is available, irrespective of its failure and repair rates. For the slower server, there exists an optimal threshold policy that depends on the queue length and the state of the faster server. We also analyze a variant of the main model with multiple unreliable servers that have identical service rates but distinct reliability characteristics. In that case it is always optimal to route customers to idle servers, and the optimal policy is insensitive to the servers' reliability characteristics.
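
As an illustration of the threshold-type routing described above, the following sketch simulates a two-server system under a hypothetical policy: send a waiting customer to the faster server whenever it is up and idle, and engage the slower server only when the queue length exceeds a threshold. All parameter values, the preemption-on-failure behavior, and the assumption that the faster server may fail while idle are illustrative assumptions, not taken from the paper.

```python
import random

def simulate(lam, mu_fast, mu_slow, fail, repair, threshold,
             horizon=200_000, seed=0):
    """Simulate the two-server queue under an illustrative threshold policy.

    Assumptions (not from the paper): a customer preempted by a failure of
    the fast server rejoins the queue, and the fast server can fail while idle.
    Returns the time-average number of customers in the system.
    """
    rng = random.Random(seed)
    t = 0.0
    queue = 0            # customers waiting (not in service)
    fast_busy = False
    slow_busy = False
    fast_up = True
    area = 0.0           # integral of the number in system over time

    def total():
        return queue + int(fast_busy) + int(slow_busy)

    for _ in range(horizon):
        # Active exponential clocks: (rate, event label).
        clocks = [(lam, "arrival")]
        if fast_up and fast_busy:
            clocks.append((mu_fast, "fast_done"))
        if slow_busy:
            clocks.append((mu_slow, "slow_done"))
        clocks.append((fail if fast_up else repair, "fail_or_repair"))

        total_rate = sum(r for r, _ in clocks)
        dt = rng.expovariate(total_rate)
        area += total() * dt
        t += dt

        # Pick which clock fired.
        u = rng.random() * total_rate
        event = clocks[-1][1]
        for r, ev in clocks:
            if u < r:
                event = ev
                break
            u -= r

        if event == "arrival":
            queue += 1
        elif event == "fast_done":
            fast_busy = False
        elif event == "slow_done":
            slow_busy = False
        else:  # failure or repair of the fast server
            if fast_up:
                fast_up = False
                if fast_busy:        # preempted customer rejoins the queue
                    fast_busy = False
                    queue += 1
            else:
                fast_up = True

        # Re-apply the routing policy at every event epoch.
        if queue > 0 and fast_up and not fast_busy:
            fast_busy = True
            queue -= 1
        if queue > threshold and not slow_busy:
            slow_busy = True
            queue -= 1

    return area / t


if __name__ == "__main__":
    for thr in range(0, 5):
        avg = simulate(lam=1.0, mu_fast=2.0, mu_slow=0.6,
                       fail=0.2, repair=1.0, threshold=thr)
        print(f"threshold={thr}: avg number in system ~ {avg:.2f}")
```

Running the script for several thresholds gives a rough sense of how the long-run average number in system varies with the point at which the slower server is engaged; it is a numerical illustration only, not a substitute for the paper's structural proofs.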
