Abstract

We investigate the problem of joint resource block (RB) and power allocation in a distributed manner using game-theoretic learning, in an underlay device-to-device (D2D) network where device pairs communicate directly with each other by reusing the spectrum allocated to cellular users. We formulate joint RB and power allocation as a multi-agent learning problem with discrete strategy sets, and propose partially distributed and fully distributed learning algorithms to determine the RB and power level to be used by each device pair. The partially distributed algorithms, viz., Fictitious Play and its variant Fading Memory Joint Strategy Fictitious Play with Inertia, achieve a Nash equilibrium (NE) of the sum-rate maximization game in a static wireless environment. The fully distributed and uncoupled Stochastic Learning Algorithm converges to a pure-strategy NE of the interference mitigation game in a time-varying radio environment. We provide proofs for the existence of NE and the convergence of the learning algorithms to NE. The performance of the proposed schemes is evaluated in log-normal, Rayleigh, and Nakagami fading environments and compared with an existing hybrid scheme and a centralized scheme. Simulation results show that the partially distributed schemes match the performance of the centralized scheme, while the fully distributed scheme performs similarly to the hybrid scheme but with much reduced signaling and computation overhead.
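To illustrate the kind of fully distributed, uncoupled update the abstract refers to, the following is a minimal sketch of a linear reward-inaction stochastic learning automaton, a standard building block for such algorithms. The function name, step size, and toy reward model are illustrative assumptions, not details taken from the paper; each D2D pair would maintain a probability vector over its discrete RB/power strategies and update it using only its own observed reward.

```python
import random

def lri_update(probs, chosen, reward, step=0.1):
    """Linear reward-inaction update for one learning automaton.

    probs  : current probabilities over discrete strategies (e.g., RB/power pairs)
    chosen : index of the strategy played this slot
    reward : normalized reward in [0, 1] (e.g., a scaled achieved rate; illustrative)
    step   : learning rate (illustrative value)
    """
    return [
        p + step * reward * (1 - p) if i == chosen else p - step * reward * p
        for i, p in enumerate(probs)
    ]

# Each device pair repeatedly samples a strategy, observes its reward,
# and shifts probability mass toward strategies that pay off.
probs = [0.25, 0.25, 0.25, 0.25]
for _ in range(200):
    chosen = random.choices(range(len(probs)), weights=probs)[0]
    reward = 1.0 if chosen == 2 else 0.2   # toy reward favoring strategy 2
    probs = lri_update(probs, chosen, reward)
```

The update is uncoupled in the sense the abstract uses: a pair never needs the strategies or payoffs of other pairs, only its own reward, which is what keeps signaling overhead low.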
