Abstract

A fluid approximation gives the main term in the asymptotic expansion of the value function for a controllable stochastic network. Policies whose value functions have the same asymptotics as the value function of the optimal policy are called asymptotically optimal. We consider the problem of selecting, from this set of asymptotically optimal policies, a best one in the sense that the next term of its asymptotic expansion is minimal. The analysis of this problem is closely connected with large-deviations problems for random walks.
