Abstract
Cognitive radio technology enables licensed users (primary users, PUs) to trade surplus spectrum, temporarily transferring spectrum usage rights to unlicensed users (secondary users, SUs) in exchange for a reward. The rented spectrum is used to establish a secondary network. However, the amount of spectrum rented out affects both the quality of service (QoS) of the PU and the reward it earns. The PU therefore needs a resource management scheme that allocates a given amount of the offered spectrum optimally among multiple service classes and adapts to changes in network conditions. The PU should support different classes of SUs that pay different prices for their spectrum usage. We propose a novel approach that maximizes a PU's reward while maintaining QoS for the PUs and for the different classes of SUs. These complex, conflicting objectives are embedded in our reinforcement learning (RL) model, which derives resource adaptations to changing network conditions so that PUs' profit is continuously maximized. The available spectrum is managed by the PU, which executes the optimal control policy extracted using RL. Performance evaluation of the proposed RL solution shows that the scheme adapts to different conditions, guarantees the required QoS for PUs, and maintains the QoS for multiple classes of SUs while maximizing PUs' profits. The results show that a cognitive mesh network can support additional SU traffic while still ensuring PU QoS. In our model, PUs exchange channels based on spectrum demand and traffic load. The solution is extended to the case of multiple PUs in the network, for which a new distributed algorithm is proposed to dynamically manage spectrum allocation among PUs.
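As a rough illustration of the trade-off described above, the Python sketch below computes a per-epoch PU reward from per-class leasing income minus a penalty for PU QoS degradation. The class names, prices, and penalty weight are assumptions chosen for illustration only; they are not the paper's actual reward formulation.

```python
# Illustrative sketch only: the SU class names, prices, and penalty weight
# below are assumptions, not the reward model defined in the paper.

def pu_reward(alloc_mhz, price_per_mhz, pu_qos_violation, penalty_weight=10.0):
    """Reward for one decision epoch.

    alloc_mhz        -- dict: SU class -> bandwidth rented to that class (MHz)
    price_per_mhz    -- dict: SU class -> price paid per MHz
    pu_qos_violation -- fraction of PU traffic missing its QoS target (0..1)
    penalty_weight   -- how strongly PU QoS degradation reduces the reward
    """
    income = sum(alloc_mhz[c] * price_per_mhz[c] for c in alloc_mhz)
    return income - penalty_weight * pu_qos_violation


# Example: two SU classes paying different prices, slight PU QoS degradation.
print(pu_reward({"gold": 4.0, "bronze": 6.0},
                {"gold": 2.0, "bronze": 0.5},
                pu_qos_violation=0.05))
```

Leasing more spectrum raises the income term but, once PU traffic is squeezed, the penalty term grows; the RL controller's job is to find the allocation that balances the two under changing load.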
Highlights
In conventional spectrum management schemes, spectrum assignment decisions are often static, with spectrum allocated to licensed users (PUs) on a long-term basis for large geographical regions
Network overview: we present our cognitive wireless mesh network (CWMN), where the secondary network consisting of SUs is overlaid on a PU's primary network
On-demand spectrum sharing between PUs: we show how PUs share free spectrum to maximize the total profit based on spectrum demand and interference constraints
Summary
In conventional spectrum management schemes, spectrum assignment decisions are often static, with spectrum allocated to licensed users (PUs) on a long-term basis for large geographical regions. To overcome the spectrum scarcity problem, the Federal Communications Commission (FCC) has already started work on the concept of spectrum sharing, where SUs may use licensed spectrum as long as their usage does not harm PUs [1]. PUs are expected to support various kinds of applications with different QoS requirements, which complicates the design of the architecture and protocols of this generation of networks. Reinforcement learning (RL) [5], a subfield of artificial intelligence (AI), is an attractive solution to the spectrum trading problem in WMNs for a number of reasons: it provides a way of finding an optimal solution purely from experience and requires no specific model of the environment; the learning agent builds up its own model of the environment by interacting with it.
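To make the model-free learning idea concrete, here is a minimal tabular Q-learning loop for a toy version of the spectrum-leasing decision. The state/action encoding, the simulated environment, and all parameter values are assumptions made for this sketch; they are not the state space, reward, or algorithm configuration used in the paper.

```python
import random

# Minimal tabular Q-learning sketch (illustrative assumptions throughout):
# state  = number of channels currently demanded by PU traffic (0..N_CHANNELS)
# action = number of idle channels leased to SUs this epoch
# reward = leasing income minus a penalty if PU traffic is left short.

N_CHANNELS = 10
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
PRICE, PENALTY = 1.0, 5.0               # assumed price per leased channel / QoS penalty

Q = {(s, a): 0.0 for s in range(N_CHANNELS + 1) for a in range(N_CHANNELS + 1)}

def step(state, action):
    """Toy environment: PU demand varies randomly; over-leasing hurts PU QoS."""
    leased = min(action, N_CHANNELS)
    next_state = random.randint(0, N_CHANNELS)             # next PU channel demand
    shortfall = max(0, next_state - (N_CHANNELS - leased))  # PU channels missing
    reward = PRICE * leased - PENALTY * shortfall
    return next_state, reward

def choose_action(state):
    """Epsilon-greedy choice over feasible actions (only idle channels can be leased)."""
    actions = list(range(N_CHANNELS - state + 1))
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

state = random.randint(0, N_CHANNELS)
for _ in range(50000):
    action = choose_action(state)
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in range(N_CHANNELS - next_state + 1))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# Learned policy: how many channels to lease for each PU load level.
print({s: max(range(N_CHANNELS - s + 1), key=lambda a: Q[(s, a)])
       for s in range(N_CHANNELS + 1)})
```

In the paper's setting the state and reward would also capture per-class SU demand, prices, and PU QoS, but the learning loop keeps the same shape: the agent improves its leasing policy purely from observed rewards, without an explicit model of traffic or channel dynamics.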