Abstract
A multi-variable relationship exists in Cognitive Radio Networks (CRNs) among factors such as energy efficiency, throughput, delay and Signal-to-Interference-plus-Noise Ratio (SINR). The SINR indicates the quality of a signal and is defined as the power of the signal of interest over the total power of the interfering signals plus noise. This work proposes an energy- and delay-efficient channel allocation strategy for CRNs using Q-Learning and actor-critic algorithms that maximize rewards. We also propose a Proximal Policy Optimization (PPO) algorithm that clips the surrogate objective to prevent large policy changes and ensure that the other parameters remain stable over time. We study the tradeoff between rewards, energy efficiency and the other parameters, and compare the algorithms with respect to the same. Results show that the proposed PPO method, at the cost of a modest increase in energy consumption, significantly reduces the delay, improves the throughput and reduces the packet loss ratio for efficient channel allocation. This is supported by the findings in the results section, where comparing the proposed method with other algorithms shows improved throughput and channel utilization. As the simulation results indicate that the PPO algorithm achieves very high throughput and significantly minimizes delay and packet loss, it is suitable for all sorts of services such as video, imaging or M2M. The results are also compared with two existing channel allocation schemes, confirming that the proposed algorithm performs better in terms of the throughput discussed in one scheme and the channel efficiency in the other.
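The clipping of the surrogate objective mentioned above can be illustrated with a minimal sketch. This is a generic rendering of the standard PPO clipped objective, not the paper's implementation; the function name and the clip parameter `eps` are our own choices for illustration.

```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s): how much the new policy
    # deviates from the old one on the sampled action.
    unclipped = ratio * advantage
    # Clipping the ratio to [1 - eps, 1 + eps] bounds the policy change.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum means the update never profits from moving
    # the ratio outside the clip range, which keeps training stable.
    return np.minimum(unclipped, clipped)

# Example: a large ratio with a positive advantage is capped at 1 + eps.
print(ppo_clipped_objective(1.5, 1.0))  # capped at 1.2
print(ppo_clipped_objective(0.5, 1.0))  # unclipped term is smaller: 0.5
```

In practice the negative of this quantity, averaged over a batch of transitions, is minimized by gradient descent on the policy parameters.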