Abstract

This paper investigates the problem of distributed channel selection for interference mitigation in cognitive radio networks (CRNs) with block-fading channels, using a game-theoretic solution. Specifically, the channel gains are block-fixed within a slot and change randomly from slot to slot. Existing algorithms, originally designed for static channels, cannot converge in the presence of such time-varying channels. We formulate this problem as a non-cooperative game with random payoffs, in which the utility of each player (CR user) is defined as the expected weighted interference it experiences. This game is proved to be a potential game, with the network utility, the expected weighted aggregate interference, serving as the potential function. We then propose a stochastic learning automata based distributed channel selection algorithm, with which the CR users learn desirable channel selections from their action-payoff history. It is analytically shown that the proposed learning algorithm converges, without any information exchange, to a pure-strategy Nash equilibrium (NE) that maximizes the network utility either globally or locally. Moreover, simulation results show that the proposed algorithm achieves a higher normalized transmission rate.
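For context, stochastic learning automata algorithms of this kind typically rely on a linear reward-inaction update: each user adjusts its channel-selection probabilities toward the channel just played, in proportion to a normalized payoff. The following is a minimal sketch under that assumption; the function names, step size, and payoff values are illustrative and not taken from the paper.

```python
import numpy as np

def sla_update(prob, chosen, reward, step_size=0.1):
    """Linear reward-inaction update for one user's channel-selection probabilities.

    prob      : current mixed strategy over channels (non-negative, sums to 1)
    chosen    : index of the channel selected in this slot
    reward    : normalized payoff in [0, 1], e.g. a decreasing function of the
                experienced weighted interference (assumed normalization)
    step_size : learning rate; smaller values favor convergence to a pure-strategy NE
    """
    prob = np.asarray(prob, dtype=float)
    indicator = np.zeros_like(prob)
    indicator[chosen] = 1.0
    # Shift probability mass toward the chosen channel in proportion to the reward.
    return prob + step_size * reward * (indicator - prob)

# Hypothetical single-user example with 3 channels, where channel 1 yields
# a consistently higher normalized payoff than the others.
p = np.ones(3) / 3
rng = np.random.default_rng(0)
for _ in range(200):
    a = rng.choice(len(p), p=p)
    r = 0.8 if a == 1 else 0.2   # illustrative payoffs only
    p = sla_update(p, a, r)
print(p)  # the probability vector concentrates on the better channel
```

In the multi-user setting described in the abstract, every CR user runs such an update in parallel using only its own action-payoff history, which is what allows convergence without information exchange.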
