Abstract

One of the most complex challenges that wireless communication systems will face in the coming years is the management of the radio resource. According to forecasts (CISCO, 2020), the growth of mobile devices will lead to the coexistence of about 8.8 billion mobile devices, with a rising trend in the following years. This scenario makes the reuse of the radio resource particularly critical, since the resource itself will not undergo significant changes in terms of available bandwidth, and one of the biggest problems will be to identify solutions that optimize its use. This work shows how a combined approach of a Reinforcement Learning model and a Supervised Learning model (a Multi-Layer Perceptron) can provide good performance in predicting the channel behavior and in the overall performance of the transmission chain, even for Cognitive Radio devices with limited computational power, such as those used in NB-IoT, LoRaWAN, and Sigfox networks.

Highlights

  • Current communication networks are rather complex dynamic systems. The simulation tools available to estimate the behavior of these architectures, on the other hand, are based on simplified models that are often unable to reproduce the interaction of the many components involved, such as the presence of interferers and phenomena such as fading, moving obstacles, atmospheric events and, not least, the characteristics of the surrounding environment, all of which can negatively affect the parameters of the system, such as frequency, amplitude, and delay.

  • We introduce the concept of Cognitive Radio and present a Supervised Learning model, applied to an indoor context, in which the system is able to predict the behavior of the channel inside the premises and to adapt some transmission parameters in order to guarantee a constant Bit Error Ratio (BER) value (a minimal sketch of this idea follows the list).

  • Building on the valuable work of Gawłowicz and Zubow (2019), which proposes combining the two simulation tools Network Simulator (ns-3) and OpenAI Gym, we present an optimized Q-Learning algorithm that allows the agent to predict the behavior of the environment when sudden interference occurs in the system and to implement the correct policy in a Reinforcement Learning setting (a tabular Q-learning sketch also follows the list).
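
The first sketch below illustrates the kind of Supervised Learning model mentioned in the second highlight: an MLP regressor (here scikit-learn's MLPRegressor, trained on synthetic placeholder data purely to make the snippet runnable) predicts the BER from hypothetical link features, and the transmitter picks the lowest power whose predicted BER meets a target. The feature set, layer sizes, and power-adaptation rule are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (not the authors' code): an MLP regressor that learns to
# predict BER from hypothetical link features, so the transmitter can pick
# the lowest power that keeps the predicted BER at or below a target.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical features: [distance_m, tx_power_dBm, interference_dBm].
X = rng.uniform([1.0, 0.0, -100.0], [30.0, 20.0, -60.0], size=(2000, 3))
# Synthetic placeholder target, log10(BER), only to make the sketch runnable.
y = -6.0 + 0.08 * X[:, 0] - 0.15 * X[:, 1] + 0.02 * (X[:, 2] + 80.0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=2000, random_state=0))
model.fit(X, y)

def pick_tx_power(distance_m, interference_dbm, target_log_ber=-4.0):
    """Return the lowest power (dBm) whose predicted BER meets the target."""
    for tx_power in np.arange(0.0, 20.5, 0.5):
        pred = model.predict([[distance_m, tx_power, interference_dbm]])[0]
        if pred <= target_log_ber:
            return tx_power
    return 20.0  # fall back to maximum power

print(pick_tx_power(distance_m=12.0, interference_dbm=-75.0))
```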

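The second sketch is a generic tabular Q-learning loop against a Gym-style environment using the classic reset/step API. In the setting of the third highlight, the environment would be the one exposed by ns3-gym on top of an ns-3 simulation; here it is left abstract, with discrete states (e.g. the index of the currently occupied channel) and discrete actions (e.g. the channel to transmit on next) assumed for illustration. This is a minimal instance of Q-learning, not the authors' optimized algorithm.

```python
# Minimal tabular Q-learning sketch (assumes a classic Gym-style environment
# with discrete observation and action spaces; in the paper's setting this
# environment would be provided by ns3-gym on top of an ns-3 simulation).
import numpy as np

def train_q_learning(env, episodes=200, alpha=0.1, gamma=0.95,
                     eps=1.0, eps_min=0.05, eps_decay=0.995):
    n_states = env.observation_space.n
    n_actions = env.action_space.n
    q = np.zeros((n_states, n_actions))

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection.
            if np.random.random() < eps:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(q[state]))

            next_state, reward, done, _ = env.step(action)

            # Standard Q-learning update rule.
            q[state, action] += alpha * (
                reward + gamma * np.max(q[next_state]) - q[state, action])
            state = next_state

        eps = max(eps_min, eps * eps_decay)  # decay exploration over episodes
    return q
```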

Summary

Introduction

Current communication networks are rather complex dynamic systems. The simulation tools available to estimate the behavior of these architectures, on the other hand, are based on simplified models that are often unable to reproduce the interaction of the many components involved, such as the presence of interferers and phenomena such as fading, moving obstacles, atmospheric events and, not least, the characteristics of the surrounding environment, all of which can negatively affect the parameters of the system, such as frequency, amplitude, and delay. The original contribution of this paper is the following: by appropriately combining two Machine Learning methodologies, it is possible to predict the behavior of the radio channel at a low computational cost, making this approach suitable for environments where terminals have limited computing capacity, as in IoT systems, LoRaWAN, and Sigfox. This translates into longer battery life and the possibility of increasing the number of terminals in the area served by a single node.
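
To give a feel for the low-computational-cost argument, the following back-of-the-envelope sketch (with hypothetical layer sizes, not taken from the paper) counts the multiply-accumulate operations of one forward pass of a small MLP; together with a constant-time Q-table lookup, this kind of workload fits comfortably within the budget of microcontroller-class IoT hardware.

```python
# Back-of-the-envelope sketch with hypothetical layer sizes (not the paper's):
# the per-inference cost of a small MLP is a few hundred multiply-accumulate
# operations, and a tabular Q-learning policy adds only an O(1) table lookup.
layers = [3, 16, 16, 1]  # input features, two hidden layers, scalar output
macs = sum(a * b for a, b in zip(layers[:-1], layers[1:]))
print(macs)  # 3*16 + 16*16 + 16*1 = 320 multiply-accumulates per forward pass
```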

