Abstract

With the ubiquitous growth of Internet-of-Things (IoT) devices, current low-power wide-area network (LPWAN) technologies will inevitably face performance degradation due to congestion and interference. Rule-based approaches to assigning and adapting device parameters are insufficient in dynamic, massive-IoT scenarios. For example, the adaptive data rate (ADR) algorithm in LoRaWAN has been shown to be inefficient and outdated for large-scale IoT networks. Meanwhile, new solutions involving machine learning (ML) and reinforcement learning (RL) techniques have proven highly effective at resource allocation in dense IoT networks. In this article, we propose a new concept that combines decentralized and centralized approaches, using two independent learning methods to allocate the spreading factor (SF) and transmission power of the devices. The SF is allocated to each device using RL for the contextual bandit problem, while the transmission power is assigned centrally by treating it as a supervised ML problem. We compare our approach with existing state-of-the-art algorithms, showing a significant improvement in both network-level goodput and energy consumption, especially for large and highly congested networks.
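
To make the decentralized SF allocation more concrete, the sketch below shows a minimal epsilon-greedy contextual bandit that picks one of the LoRaWAN spreading factors (SF7-SF12) per device. The context feature (a coarse RSSI bucket), the reward signal (whether an uplink was acknowledged), and all class and parameter names are illustrative assumptions for this sketch only; the abstract does not specify the paper's exact bandit formulation.

    import random
    from collections import defaultdict

    # Spreading factors available in LoRaWAN (SF7..SF12).
    SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]

    class EpsilonGreedySFBandit:
        """Per-device contextual bandit that selects a spreading factor.

        The context key, reward definition, and epsilon value are assumptions
        made for illustration; they are not taken from the paper.
        """

        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            # Running value estimates and sample counts per (context, SF) pair.
            self.values = defaultdict(float)
            self.counts = defaultdict(int)

        def select_sf(self, context):
            """Explore with probability epsilon, otherwise pick the best-known SF."""
            if random.random() < self.epsilon:
                return random.choice(SPREADING_FACTORS)
            return max(SPREADING_FACTORS, key=lambda sf: self.values[(context, sf)])

        def update(self, context, sf, reward):
            """Incremental-mean update of the action-value estimate."""
            key = (context, sf)
            self.counts[key] += 1
            self.values[key] += (reward - self.values[key]) / self.counts[key]

    if __name__ == "__main__":
        bandit = EpsilonGreedySFBandit(epsilon=0.1)
        # Hypothetical context: an RSSI bucket observed for this device.
        context = "rssi_bucket_-110dBm"
        sf = bandit.select_sf(context)
        # Assumed reward: 1.0 if the uplink was acknowledged, 0.0 otherwise.
        bandit.update(context, sf, reward=1.0)
        print(f"Selected SF{sf}")

Under these assumptions, each device learns its SF choice locally from acknowledgement feedback, while the transmission-power assignment described in the abstract would be handled separately at the network server as a supervised learning task.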
