Abstract

This paper considers the fundamental convergence time for opportunistic scheduling over time-varying channels. The channel state probabilities are unknown, and algorithms must perform some type of estimation and learning while they make decisions to optimize network utility. Existing schemes can achieve a utility within $\epsilon$ of optimality, for any desired $\epsilon > 0$, with convergence and adaptation times of $O(1/\epsilon^{2})$. This paper shows that if the utility function is concave and smooth, then $O(\log(1/\epsilon)/\epsilon)$ convergence time is possible via an existing stochastic variation on the Frank-Wolfe algorithm, called the RUN algorithm. Furthermore, a converse result is proven to show it is impossible for any algorithm to have convergence time better than $O(1/\epsilon)$, provided the algorithm has no a priori knowledge of channel state probabilities. Hence, RUN is within a logarithmic factor of convergence time optimality. However, RUN has a vanishing stepsize and hence has an infinite adaptation time. Using stochastic Frank-Wolfe with a fixed stepsize yields improved $O(1/\epsilon^{2})$ adaptation time, but convergence time increases to $O(1/\epsilon^{2})$, similar to existing drift-plus-penalty based algorithms. This raises important open questions regarding optimal adaptation.
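To make the vanishing-stepsize versus fixed-stepsize distinction concrete, the following is a minimal sketch of a stochastic Frank-Wolfe scheduler for opportunistic scheduling. It is not the paper's exact RUN pseudocode; the proportional-fair utility, the ON/OFF channel model, and all function names are illustrative assumptions, and the $2/(t+2)$ stepsize is the standard vanishing choice for Frank-Wolfe-type methods.

```python
import numpy as np

def stochastic_frank_wolfe(utility_grad, sample_channel, num_users, T, fixed_step=None):
    """Hypothetical stochastic Frank-Wolfe scheduler (illustrative, not the paper's RUN code).

    utility_grad(x): gradient of the concave utility at the average service vector x.
    sample_channel(): per-user rates for the current slot (unknown distribution).
    fixed_step: if None, use the vanishing stepsize 2/(t+2); otherwise use a constant
    stepsize, which trades slower convergence for a finite adaptation time.
    """
    x = np.zeros(num_users)                 # running average of the service vector
    for t in range(T):
        rates = sample_channel()            # observe one channel-state sample
        g = utility_grad(x)                 # gradient of the utility at the current average
        user = int(np.argmax(g * rates))    # Frank-Wolfe step: serve the gradient-weighted best user
        v = np.zeros(num_users)
        v[user] = rates[user]               # vertex of the instantaneous rate region
        gamma = fixed_step if fixed_step is not None else 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * v     # convex-combination update toward the chosen vertex
    return x

# Example: proportional-fair-style utility sum(log(1 + x_i)) with 3 users on ON/OFF channels.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = lambda x: 1.0 / (1.0 + x)                              # gradient of sum(log(1 + x_i))
    channel = lambda: rng.binomial(1, [0.3, 0.5, 0.7]).astype(float)
    print(stochastic_frank_wolfe(grad, channel, num_users=3, T=10_000))
```

With `fixed_step=None` the stepsize vanishes, matching the RUN-style regime discussed above; passing a small constant such as `fixed_step=0.01` illustrates the fixed-stepsize variant whose adaptation and convergence times are both $O(1/\epsilon^{2})$.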
