Abstract

We design a dynamic rate-scheduling policy of Markov type using the solution (a socially optimal Nash equilibrium point) to a utility-maximization problem over a randomly evolving capacity set. The underlying system is a stochastic network of generalized processor-sharing queues in a random environment, where job arrivals to each queue follow a doubly stochastic renewal process (DSRP). Both the random environment and the random arrival rate of each DSRP are driven by a finite-state continuous-time Markov chain. The scheduling policy optimizes greedily with respect to each queue and each environmental state. Since a closed-form expression for the performance of such a queueing system under this policy is difficult to obtain, we establish a reflecting diffusion model with regime switching for its performance measures. We then justify the policy's asymptotic optimality by deriving stochastic fluid and diffusion limits for the system under heavy traffic. In addition, we identify a cost function, related to the utility function, that is minimized by minimizing the workload process in the diffusion limit. Importantly, our queueing model includes, as special cases, typical systems in future wireless networks, such as the J-user multi-input multi-output (MIMO) multiple-access channel and the broadcast channel under Markov fading with cooperation and admission control.
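To illustrate the arrival model, the following minimal sketch simulates the simplest instance of a DSRP driven by a finite-state continuous-time Markov chain: a Markov-modulated Poisson process, where the arrival rate switches whenever the chain jumps. The two-state generator `Q` and the per-state rates are illustrative values, not taken from the paper.

```python
import random

def simulate_mmpp(Q, rates, T, seed=0):
    """Simulate arrival times on [0, T] of a Markov-modulated Poisson
    process: a CTMC with generator matrix Q modulates the arrival rate,
    which equals rates[state] while the chain sits in `state`."""
    rng = random.Random(seed)
    n = len(Q)
    state, t = 0, 0.0
    arrivals = []
    while t < T:
        # Sojourn time in the current state is Exp(-Q[state][state]).
        hold = rng.expovariate(-Q[state][state])
        end = min(t + hold, T)
        # Poisson arrivals at rate rates[state] during this sojourn.
        s = t
        while True:
            s += rng.expovariate(rates[state])
            if s >= end:
                break
            arrivals.append(s)
        t = end
        if t < T:
            # Jump to state j != state with probability Q[state][j] / (-Q[state][state]).
            weights = [Q[state][j] if j != state else 0.0 for j in range(n)]
            state = rng.choices(range(n), weights=weights)[0]
    return arrivals

# Illustrative two-state environment: slow arrivals in state 0, fast in state 1.
Q = [[-1.0, 1.0], [2.0, -2.0]]
rates = [0.5, 5.0]
arr = simulate_mmpp(Q, rates, T=100.0)
```

The general DSRP in the paper allows non-exponential inter-arrival times; this sketch keeps them exponential only to keep the simulation to a few lines.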
