Abstract

In this paper we propose a self-organized method for Intercell Interference Coordination (ICIC) between the femto and macro layers. We consider the challenging situation in which femtocells are completely autonomous, i.e. they receive no feedback from the macro network. The absence of a macro-to-femto interface is compliant with 3GPP Release 10. We propose a distributed learning approach, based on Reinforcement Learning (RL), for environments characterized by partial information due to the lack of communication between femtos and macros. The theory behind this approach is founded on the Partially Observable Markov Decision Process (POMDP). The POMDP requires the construction of a set of beliefs about the environment. These beliefs are built following spatial interpolation theory, which allows the femtocells to estimate the interference perceived by the macro users. Simulation results show that, through the proposed methodology, femtocells can autonomously learn a transmission power policy that manages the aggregated interference at the macro users. Performance is compared to the complete-information situation, which is compliant with the status of Release 11.
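To illustrate the general idea, the sketch below shows how a femtocell might interpolate its local interference measurements to form a belief about the interference seen at an estimated macro-user position, and then use that belief in a simple tabular Q-learning loop over transmit-power levels. This is a minimal, illustrative sketch only: the inverse-distance weighting, the belief discretization, the reward shape, and all numerical values are assumptions for demonstration and do not reproduce the paper's exact POMDP formulation.

```python
import numpy as np

def idw_interpolate(sample_pos, sample_vals, query_pos, power=2.0):
    """Inverse-distance-weighted estimate of interference at query_pos
    (assumed interpolation scheme, used here only as an illustration)."""
    d = np.linalg.norm(sample_pos - query_pos, axis=1)
    if np.any(d < 1e-9):                       # query coincides with a sample
        return float(sample_vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * sample_vals) / np.sum(w))

# Hypothetical setup: interference measured (dBm) at a few sensing points
rng = np.random.default_rng(0)
sample_pos = rng.uniform(0, 100, size=(5, 2))   # sensing locations (m)
sample_vals = rng.uniform(-110, -80, size=5)    # measured interference (dBm)
macro_user_pos = np.array([40.0, 60.0])         # assumed macro-user location

# Belief: interpolated interference level at the macro user, discretized
belief = idw_interpolate(sample_pos, sample_vals, macro_user_pos)
state = int(np.digitize(belief, bins=[-100.0, -90.0]))   # 3 belief states

# Tabular Q-learning over candidate transmit-power levels (dBm)
power_levels = [-10.0, 0.0, 10.0, 20.0]
Q = np.zeros((3, len(power_levels)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(tx_power, interference_estimate, threshold=-95.0):
    """Assumed reward: favor femto transmit power while penalizing the
    estimated macro-user interference above a protection threshold."""
    penalty = max(0.0, interference_estimate + tx_power / 10.0 - threshold)
    return tx_power / 20.0 - penalty

for _ in range(1000):
    # Epsilon-greedy action selection over power levels
    if rng.random() < epsilon:
        a = int(rng.integers(len(power_levels)))
    else:
        a = int(np.argmax(Q[state]))
    r = reward(power_levels[a], belief)
    # Single belief state kept fixed here for simplicity of the sketch
    Q[state, a] += alpha * (r + gamma * np.max(Q[state]) - Q[state, a])

print("Learned power level (dBm):", power_levels[int(np.argmax(Q[state]))])
```

In the paper's setting the belief would be updated as new measurements arrive and the policy learned jointly across many femtocells; the sketch collapses this to a single static belief purely to show how interpolation-based beliefs can feed a reinforcement-learning power-control loop.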
