Abstract

This paper investigates the problem of distributed resource sharing in a device-to-device-enabled heterogeneous network, where device pairs choose their transmission channels, modes, base stations (BSs), and power levels without any control by the BSs, based only on locally observable information. The problem is formulated as a Bayesian coalition formation game in which the players (device pairs) form coalitions to maximize their long-term rewards without prior knowledge of the values of potential coalitions or the types of their members. To cope with these uncertainties, a novel Bayesian reinforcement learning (RL) model is derived in which the players update, through repeated coalition formation, their beliefs about member types and coalitional values until they reach a stable coalitional agreement. The proposed Bayesian RL-based coalition formation algorithms are implemented in a Long Term Evolution-Advanced network and evaluated through simulations. The algorithms outperform other relevant resource allocation schemes and achieve near-optimal results after a relatively small number of RL iterations.
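As a rough illustration of the belief-update idea described above, the following Python snippet sketches a minimal, hypothetical Bayesian learner in which a single device pair keeps a Beta belief over the quality of each candidate coalition and refines it from rewards observed over repeated coalition-formation rounds. The class names, the Beta/Bernoulli reward model, and the exploration rate are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch (not the authors' implementation) of Bayesian belief
# updating over candidate coalitions, assuming binary (Bernoulli) rewards.
import random


class DevicePair:
    def __init__(self, pair_id):
        self.pair_id = pair_id
        # Beta(alpha, beta) belief per candidate coalition (assumed prior: Beta(1, 1)).
        self.beliefs = {}

    def expected_value(self, coalition):
        alpha, beta = self.beliefs.get(coalition, (1.0, 1.0))
        return alpha / (alpha + beta)

    def update(self, coalition, reward):
        # Treat the observed reward in [0, 1] as a Bernoulli-like outcome.
        alpha, beta = self.beliefs.get(coalition, (1.0, 1.0))
        self.beliefs[coalition] = (alpha + reward, beta + (1.0 - reward))


def simulate(rounds=1000, explore=0.1):
    # Toy environment: two candidate coalitions with unknown true qualities.
    true_quality = {"coalition_A": 0.8, "coalition_B": 0.4}
    player = DevicePair(pair_id=0)
    for _ in range(rounds):
        if random.random() < explore:
            # Occasional exploration of a random coalition (arbitrary rate).
            choice = random.choice(list(true_quality))
        else:
            # Otherwise join the coalition with the highest believed value.
            choice = max(true_quality, key=player.expected_value)
        reward = 1.0 if random.random() < true_quality[choice] else 0.0
        player.update(choice, reward)
    # Posterior mean estimates should approach the true qualities.
    return {c: round(player.expected_value(c), 3) for c in true_quality}


if __name__ == "__main__":
    print(simulate())
```

In this toy setting the posterior means converge toward the underlying coalition qualities; the paper's game involves multiple interacting players, unknown member types, and channel/mode/BS/power choices, which this single-agent sketch does not capture.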
