Abstract

Device-to-device (D2D) communication is the direct communication between two D2D user equipments (DUEs) without relaying traffic through the evolved NodeB of 5G networks. In the underlay mode of resource reuse, DUEs and cellular user equipments share resource blocks, reusing the spectrum to improve system throughput. To further enhance performance, a Multi-Player Multi-Armed Bandit scheme, an extension of the classical reinforcement-learning bandit framework, is employed to control the transmission power of the DUEs and reduce the interference induced by resource sharing. Three learning strategies, namely Epsilon-first, Epsilon-greedy, and Upper Confidence Bound, are applied. Simulation results show that the proposed method improves performance in terms of the average transmission power of D2D pairs, the ratio of unallocated D2D pairs, energy efficiency, and total throughput.
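To illustrate the bandit-based power control idea at a high level, the following is a minimal Python sketch of one of the named strategies, Epsilon-greedy, where each arm is a candidate DUE transmit power level. The power levels, epsilon value, and reward signal are illustrative assumptions, not details taken from the paper, and the multi-player coordination aspect is omitted.

```python
import random

# Assumed candidate transmit power levels (arms) and exploration rate.
POWER_LEVELS_DBM = [5, 10, 15, 20, 23]
EPSILON = 0.1

class EpsilonGreedyPowerControl:
    """Epsilon-greedy bandit over transmit power levels (illustrative sketch)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # times each power level was selected
        self.values = [0.0] * n_arms  # running mean reward per power level

    def select_arm(self):
        # Explore a random power level with probability epsilon,
        # otherwise exploit the best current estimate.
        if random.random() < EPSILON:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update of the chosen arm's reward estimate.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# One decision round: the reward could be, e.g., achieved throughput minus an
# interference penalty fed back by the network (an assumed reward definition).
agent = EpsilonGreedyPowerControl(len(POWER_LEVELS_DBM))
arm = agent.select_arm()
reward = 0.8  # placeholder feedback for the chosen power level
agent.update(arm, reward)
```

Epsilon-first and Upper Confidence Bound differ only in how exploration is scheduled: Epsilon-first explores for a fixed initial phase before exploiting, while UCB adds a confidence term to each arm's estimate instead of exploring at random.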
