Abstract

A real-time cognitive radio network (CRN) testbed is implemented using the universal software radio peripheral (USRP) and GNU Radio to demonstrate the use of reinforcement learning and transfer learning schemes for spectrum handoff decisions. By considering the channel status (idle or occupied) and the channel condition (in terms of packet error rate), the sender node performs learning-based spectrum handoff. In reinforcement learning, the number of network observations required to reach optimal decisions is often prohibitively high because of the complex CRN environment. When a node encounters new channel conditions, the learning process restarts from scratch, even if a similar channel condition has been experienced before. To alleviate this issue, a transfer-learning-based spectrum handoff scheme is implemented, which enables a node to learn from its neighboring node(s) to improve its performance. In transfer learning, the node searches for an expert node in the network. If an expert node is found, the node requests the Q-table from the expert node to make its spectrum handoff decisions. If no expert node can be found, the node learns the spectrum handoff strategy on its own using reinforcement learning. Our experimental results demonstrate that the machine-learning-based spectrum handoff performs better in the long term and effectively utilizes the available spectrum. In addition, transfer learning requires fewer packet transmissions to reach an optimal solution than reinforcement learning.

DISTRIBUTION STATEMENT A: Approved for Public Release; distribution unlimited, 88ABW-2017-6274; 13 Dec 2017.
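
The abstract does not give implementation details, but a minimal sketch of the two mechanisms it describes could look like the following: a tabular Q-learning agent whose state combines channel status and a coarse packet-error-rate level, whose actions select the channel to transmit on (a handoff occurs when the chosen channel differs from the current one), and a transfer step that simply seeds a new node's Q-table with an expert neighbor's table. All names, parameters, and the reward shape here are hypothetical assumptions, not the authors' code; in the real testbed the observations would come from USRP/GNU Radio measurements and the expert's Q-table would be requested over the air.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning for spectrum handoff, plus
# Q-table transfer from an "expert" neighbor. Parameters are assumed.

NUM_CHANNELS = 4
ACTIONS = list(range(NUM_CHANNELS))      # action k = transmit on channel k
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration


class SpectrumHandoffAgent:
    def __init__(self, q_table=None):
        # State: (current channel, channel status, coarse PER level).
        # A transferred Q-table (from an expert node) can seed learning.
        self.q = defaultdict(float, q_table or {})

    def choose_channel(self, state):
        """Epsilon-greedy channel selection; a different channel means handoff."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """One-step Q-learning update from the observed transmission outcome."""
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + GAMMA * best_next
        self.q[(state, action)] += ALPHA * (td_target - self.q[(state, action)])


def reward(channel_idle, per):
    """Assumed reward: penalize occupied channels and high packet error rate."""
    return -1.0 if not channel_idle else 1.0 - per


def make_agent(expert=None):
    """Transfer-learning step: reuse an expert node's Q-table when one is found,
    otherwise fall back to reinforcement learning from scratch."""
    if expert is not None:
        return SpectrumHandoffAgent(q_table=dict(expert.q))
    return SpectrumHandoffAgent()
```

Seeding the Q-table this way is what lets a new node skip the early exploration phase, which is consistent with the abstract's claim that transfer learning needs fewer packet transmissions to reach an optimal solution than learning from scratch.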
