Abstract

We consider scheduling over a wireless system in which channel state information is not available to the scheduler a priori but can be inferred from past history. Specifically, the wireless system is modeled as a network of parallel queues, where the channel state of each queue evolves stochastically as an independent on/off Markov chain. The scheduler is aware of the queue lengths but not of the current channel states, which it can only estimate from the acknowledgment history, and it must choose at most one queue at a time for transmission. We first characterize the capacity region of the system using tools from the theory of Markov decision processes (MDPs); specifically, we prove that the boundary of the capacity region is the uniform limit of a sequence of linear programming (LP) solutions. Next, we combine the LP solution with a queue-length-based scheduling mechanism that operates over long frames to obtain a throughput-optimal policy for the system. By incorporating results from MDP theory within the Lyapunov-stability framework, we show that this frame-based policy stabilizes the system for all arrival rates in the interior of the capacity region.
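The system model above can be illustrated with a minimal simulation sketch. All numerical values (transition probabilities, arrival rate, number of queues) are assumptions chosen for illustration, and the scheduling rule shown, serving the nonempty queue with the largest product of queue length and channel-on belief, is a simplified max-weight-style heuristic, not the LP-based frame policy developed in the paper. It does show the key informational feature: beliefs about the on/off channel states are updated only from ACK/NACK feedback on the served queue.

```python
import random

# Assumed transition probabilities into the "on" state for each
# independent on/off (Gilbert-Elliott) Markov channel.
P_ON_ON, P_OFF_ON = 0.8, 0.3
ARRIVAL_P = 0.2  # Bernoulli arrival probability per queue per slot (assumed)

def propagate(belief):
    """One-step prior update of P(channel on) when no feedback is observed."""
    return belief * P_ON_ON + (1.0 - belief) * P_OFF_ON

def simulate(num_queues=3, horizon=10_000, seed=1):
    rng = random.Random(seed)
    channels = [rng.random() < 0.5 for _ in range(num_queues)]  # true states, hidden
    queues = [0] * num_queues
    beliefs = [0.5] * num_queues  # scheduler's P(channel on) estimates
    served = 0
    for _ in range(horizon):
        # Arrivals.
        for i in range(num_queues):
            queues[i] += int(rng.random() < ARRIVAL_P)
        # Serve at most one nonempty queue: max queue_length * belief.
        candidates = [i for i in range(num_queues) if queues[i] > 0]
        if candidates:
            i = max(candidates, key=lambda j: queues[j] * beliefs[j])
            if channels[i]:            # ACK: channel was on this slot
                queues[i] -= 1
                served += 1
                beliefs[i] = P_ON_ON   # next-slot belief given "on" observation
            else:                      # NACK: channel was off this slot
                beliefs[i] = P_OFF_ON  # next-slot belief given "off" observation
            for j in range(num_queues):
                if j != i:             # unserved queues: no feedback, prior update
                    beliefs[j] = propagate(beliefs[j])
        else:
            beliefs = [propagate(b) for b in beliefs]
        # Channels evolve as independent Markov chains.
        channels = [rng.random() < (P_ON_ON if c else P_OFF_ON) for c in channels]
    return served, queues

if __name__ == "__main__":
    print(simulate())
```

The belief update is the standard two-state hidden-Markov filter: observed feedback resets the belief to the corresponding transition probability, while unobserved channels are propagated through the chain's one-step dynamics.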
