Abstract

In this paper, a decentralized and self-organizing mechanism for small cell networks (such as micro-, femto-, and picocells) is proposed. In particular, an application is presented in which the small cells aim to mitigate the interference they cause to the macrocell network while maximizing their own spectral efficiencies. The proposed mechanism is based on new notions of reinforcement learning (RL) through which the small cells jointly estimate their time-average performance and update the probability distributions from which they draw their transmit configurations. Here, a minimum signal-to-interference-plus-noise ratio (SINR) is guaranteed at the macrocell user equipment (UE) while each small cell maximizes its individual performance. The proposed RL procedure is fully distributed, as every small cell base station requires only an observation of its instantaneous performance, which it can obtain from its own UE. Furthermore, it is shown that the proposed mechanism always converges to an epsilon-Nash equilibrium when all small cells share the same interest. In addition, this mechanism is shown to possess better convergence properties and to incur less overhead than existing techniques such as best response dynamics, fictitious play, or classical RL. Finally, numerical results are given to validate the theoretical findings, highlighting the inherent tradeoffs facing the small cells, namely exploration/exploitation, myopic/foresighted behavior, and complete/incomplete information.
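To make the learning dynamics concrete, the following is a minimal, self-contained sketch of the coupled update the abstract describes: each small cell maintains time-average utility estimates for its transmit configurations and drifts its mixed strategy toward a smoothed (Boltzmann-Gibbs) best response to those estimates, using only instantaneous performance feedback. The utility model toy_utility, the power levels, the temperature kappa, and the learning rates alpha and lam are illustrative assumptions made for this sketch, not values or the exact algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def boltzmann_gibbs(estimates, kappa=0.1):
    """Smoothed best response: map utility estimates to a probability
    distribution over transmit configurations. A smaller kappa favors
    exploitation; a larger one favors exploration."""
    z = estimates / kappa
    z -= z.max()                                  # numerical stability
    w = np.exp(z)
    return w / w.sum()

def run_small_cell(utility_fn, n_actions, n_steps=5000,
                   alpha=0.05, lam=0.01, kappa=0.1):
    """Coupled estimation/strategy update for one small cell base station
    (hypothetical parameter values; the paper's exact scheme may differ)."""
    u_hat = np.zeros(n_actions)                   # time-average utility estimates
    pi = np.full(n_actions, 1.0 / n_actions)      # mixed strategy over configurations
    for _ in range(n_steps):
        a = rng.choice(n_actions, p=pi)           # sample a transmit configuration
        u = utility_fn(a)                         # instantaneous feedback from the UE
        u_hat[a] += alpha * (u - u_hat[a])        # update only the played action
        pi += lam * (boltzmann_gibbs(u_hat, kappa) - pi)  # drift toward smoothed BR
        pi = np.clip(pi, 1e-12, None)
        pi /= pi.sum()                            # keep pi a valid distribution
    return pi, u_hat

def toy_utility(action, powers=np.array([0.1, 0.5, 1.0])):
    """Toy stand-in utility: own spectral efficiency of the chosen power level,
    zeroed whenever the macrocell UE's SINR floor is breached. The breach is
    simulated here as a random event whose probability grows with power."""
    p = powers[action]
    se = np.log2(1.0 + 10.0 * p)
    violated = rng.random() < 0.6 * p
    return 0.0 if violated else se

pi, u_hat = run_small_cell(toy_utility, n_actions=3)
print("learned strategy:", np.round(pi, 3))
print("utility estimates:", np.round(u_hat, 3))
```

In this toy setting, a small temperature concentrates the strategy on configurations that keep the simulated SINR constraint satisfied while still yielding high spectral efficiency, whereas a larger temperature retains more randomization, illustrating the exploration/exploitation tradeoff highlighted in the abstract.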
