Abstract

We consider a general single-server multiclass queueing system that incurs a delay cost $C_k(\tau_k)$ for each class $k$ job that resides $\tau_k$ units of time in the system. This paper derives a scheduling policy that minimizes the total cumulative delay cost when the system operates during a finite time horizon. Denote the marginal delay cost function and the (possibly nonstationary) average processing time of class $k$ by $c_k = C'_k$ and $1/\mu_k$, respectively, and let $a_k(t)$ be the age or time that the oldest class $k$ job has been waiting at time $t$. We call the scheduling policy that at time $t$ serves the oldest waiting job of that class $k$ with the highest index $\mu_k(t)c_k(a_k(t))$, the generalized $c\mu$ rule. As a dynamic priority rule that depends on very little data, the generalized $c\mu$ rule is attractive to implement. We show that, with nondecreasing convex delay costs, the generalized $c\mu$ rule is asymptotically optimal if the system operates in heavy traffic and give explicit expressions for the associated performance characteristics: the delay (throughput time) process and the minimum cumulative delay cost. The optimality result is robust in that it holds for a countable number of classes and several homogeneous servers in a nonstationary, deterministic or stochastic environment where arrival and service processes can be general and interdependent.
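To make the index rule concrete, here is a minimal sketch of the class-selection step. All names are illustrative, and the example assumes stationary service rates and quadratic delay costs $C_k(\tau) = b_k\tau^2$, so $c_k(\tau) = 2b_k\tau$ (nondecreasing and convex, as the optimality result requires); the paper itself allows nonstationary rates and general interdependent arrival/service processes.

```python
def generalized_cmu_index(mu_k, c_k, age_k):
    """Index mu_k * c_k(a_k(t)) for one class at the current time."""
    return mu_k * c_k(age_k)

def select_class(t, queues, mu, c):
    """
    Generalized c-mu rule: among classes with waiting jobs, pick the
    class k maximizing mu_k * c_k(a_k(t)), where a_k(t) is the age of
    the oldest waiting class-k job. Serve that oldest job.

    queues: dict class -> list of arrival times, oldest first
    mu:     dict class -> service rate mu_k (held stationary here)
    c:      dict class -> marginal delay cost function c_k
    Returns the selected class, or None if the system is empty.
    """
    waiting = {k: q for k, q in queues.items() if q}
    if not waiting:
        return None
    return max(
        waiting,
        key=lambda k: generalized_cmu_index(mu[k], c[k], t - waiting[k][0]),
    )

# Two classes with quadratic delay costs: b_1 = 1, b_2 = 3.
queues = {1: [0.0, 2.0], 2: [1.0]}       # arrival times, oldest first
mu     = {1: 1.0, 2: 0.5}
c      = {1: lambda a: 2 * 1.0 * a,      # c_1(a) = 2 * b_1 * a
          2: lambda a: 2 * 3.0 * a}      # c_2(a) = 2 * b_2 * a
# At t = 5: class 1 index = 1.0 * 2*1*5.0 = 10; class 2 index = 0.5 * 2*3*4.0 = 12.
print(select_class(5.0, queues, mu, c))  # -> 2
```

Note that with linear delay costs $C_k(\tau) = b_k\tau$, the index reduces to the constant $\mu_k b_k$, recovering the classical static $c\mu$ rule; convex costs make the priority dynamic through the age $a_k(t)$.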
