Abstract

This paper concerns control of stochastic networks using state-dependent safety-stocks. Three examples are considered: a pair of tandem queues, a simple routing model, and the Dai-Wang re-entrant line. In each case a single policy is proposed that is independent of the network load $\rho_\bullet$. The following conclusions are obtained for the controlled network, where the finite constant $K_0$ is independent of load. (i) An optimal policy for a one-dimensional relaxation stores all inventory in a single buffer $i^*$. The policy for the (unrelaxed) stochastic network maintains, for each $k \ge 0$,
$$\mathsf{E}\Bigl[\sum_{i \ne i^*} Q_i(k)\Bigr] \;\le\; K_0\, \mathsf{E}\bigl[\log(1 + Q_{i^*}(k))\bigr],$$
where $Q(k)$ is the $\ell$-dimensional vector of buffer lengths at time $k$, initialized at $Q(0) = 0$. (ii) The policy is fluid-scale optimal and approximately average-cost optimal: the steady-state cost $\eta$ satisfies the bound
$$\eta_* \;\le\; \eta \;\le\; \eta_* + K_0 \log(\eta_*), \qquad 0 < \rho_\bullet < 1,$$
where $\eta_*$ is the optimal steady-state cost.
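As a rough illustration of the kind of policy the abstract describes (not the paper's exact construction), the sketch below simulates a pair of tandem queues in discrete time under a hypothetical logarithmic safety-stock rule: station 1 passes work downstream only while buffer 2 sits below a threshold of order log(1 + Q1), so inventory is held in the upstream buffer while the downstream station is kept busy. The function name `simulate_tandem`, the gain `kappa`, and the specific threshold are illustrative assumptions.

```python
import math
import random


def simulate_tandem(rho=0.9, T=200_000, kappa=5.0, seed=1):
    """Discrete-time simulation of two queues in tandem under an
    illustrative logarithmic safety-stock policy (a sketch only,
    not the policy constructed in the paper).

    Jobs arrive to buffer 1 with probability `rho` per slot; each
    station completes at most one service per slot.  Station 1 may
    release a job downstream only while buffer 2 is below a
    state-dependent safety stock of size kappa*log(1 + Q1).
    """
    rng = random.Random(seed)
    q1 = q2 = 0
    total_q1 = total_q2 = 0
    for _ in range(T):
        # Bernoulli(rho) arrival to the upstream buffer.
        if rng.random() < rho:
            q1 += 1
        # Station 1 works only while buffer 2 sits below its
        # logarithmic safety stock, so inventory pools upstream.
        threshold = kappa * math.log(1.0 + q1)
        if q1 > 0 and q2 < threshold:
            q1 -= 1
            q2 += 1
        # Station 2 serves whenever it has work.
        if q2 > 0:
            q2 -= 1
        total_q1 += q1
        total_q2 += q2
    return total_q1 / T, total_q2 / T


if __name__ == "__main__":
    # Under this rule the downstream buffer stays small (roughly
    # logarithmic in the upstream buffer) even as the load grows.
    for rho in (0.5, 0.8, 0.9, 0.95):
        mean_q1, mean_q2 = simulate_tandem(rho=rho)
        print(f"rho={rho:.2f}  E[Q1] ~ {mean_q1:8.2f}  E[Q2] ~ {mean_q2:6.2f}")
```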
