Abstract

This paper addresses the problem of aggregated interference generated by multiple cognitive radios (CRs) at the receivers of primary (licensed) users. In particular, we consider a secondary CR system based on the IEEE 802.22 standard for wireless regional area networks (WRANs), and we model it as a multiagent system in which the agents are the secondary base stations in charge of controlling the secondary cells. We propose a form of real-time multiagent reinforcement learning, known as decentralized Q-learning, to manage the aggregated interference generated by multiple WRAN systems. We consider both complete and partial information about the environment. By directly interacting with the surrounding environment in a distributed fashion, the multiagent system is able to learn, in the first case, an efficient policy to solve the problem and, in the second case, a reasonably good suboptimal policy. Computational and memory requirements are also considered, and two different options for uploading and processing the learning information are discussed. Simulation results, presented for both the upstream and downstream cases, reveal that the proposed approach fulfills the primary-user interference constraints without introducing signaling overhead in the system.
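To illustrate the kind of decentralized Q-learning the abstract refers to, the sketch below shows independent learners (one per secondary base station) choosing transmit power levels so that the aggregated interference at a primary receiver stays below a threshold. All states, actions, channel gains, rewards, and parameter values here are invented for illustration and are not taken from the paper; the paper's actual state/action/reward design for IEEE 802.22 WRANs differs.

```python
import random

# Hypothetical decentralized Q-learning sketch for aggregated-interference
# control. Each agent (secondary base station) keeps its own Q-table and
# learns from a shared binary state: whether the interference constraint
# was violated on the previous step. All values below are assumptions.

N_AGENTS = 3
POWER_LEVELS = [0.1, 0.5, 1.0]      # candidate transmit powers (arbitrary units)
GAIN = [0.8, 0.5, 0.3]              # assumed channel gains to the primary receiver
I_MAX = 1.0                         # assumed aggregated-interference threshold
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration

# One Q-table per agent, indexed as Q[agent][state][action]; state in {0, 1}.
Q = [[[0.0] * len(POWER_LEVELS) for _ in range(2)] for _ in range(N_AGENTS)]

def step(state, rng):
    """One joint decision: each agent acts epsilon-greedily, then updates."""
    actions = []
    for i in range(N_AGENTS):
        if rng.random() < EPS:
            actions.append(rng.randrange(len(POWER_LEVELS)))
        else:
            row = Q[i][state]
            actions.append(row.index(max(row)))
    # Aggregated interference seen at the primary receiver.
    interference = sum(GAIN[i] * POWER_LEVELS[a] for i, a in enumerate(actions))
    violated = interference > I_MAX
    next_state = int(violated)
    for i, a in enumerate(actions):
        # Assumed reward: own transmit power (a throughput proxy) when the
        # constraint holds, a penalty when the aggregate constraint breaks.
        r = -1.0 if violated else POWER_LEVELS[a]
        best_next = max(Q[i][next_state])
        Q[i][state][a] += ALPHA * (r + GAMMA * best_next - Q[i][state][a])
    return next_state

rng = random.Random(0)
state = 0
for _ in range(5000):
    state = step(state, rng)
```

Because the agents learn only from local action choices and a shared constraint-violation signal, no explicit coordination messages are exchanged, which mirrors the abstract's claim of avoiding signaling overhead; the complete- versus partial-information settings of the paper would correspond to richer or poorer state observations than this binary flag.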
