Abstract

Wireless sensor networks have become increasingly popular in recent years due to the growing demand for Internet of Things (IoT) applications, many of which are built on LoRaWAN networks. However, effective resource allocation remains a crucial challenge in LoRaWAN networks due to the limited bandwidth and the diverse demands of multiple services. This paper presents three novel resource allocation solutions for LoRaWAN network slicing to address this challenge. These solutions are based on the Multi-Armed Bandit (MAB) algorithm, which is known for balancing the exploration of available actions with the exploitation of optimal decisions. Our objective is to dynamically and efficiently allocate resources to network slices by treating resource allocation as a MAB problem. This approach aims to maximize Packet Delivery Rate (PDR) performance while ensuring each service’s Service Level Agreement (SLA). The first solution, UCB-MAB, uses the Upper Confidence Bound (UCB) strategy to balance exploration and exploitation and improve network performance. The second solution, Q-UCB-MAB, continuously updates Q-values using the Q-learning update equation and incorporates the UCB strategy for further optimization. Finally, the third solution, ARIMA-UCB-MAB, leverages the reward value predicted by an Autoregressive Integrated Moving Average (ARIMA) model within the UCB framework to enhance network performance. Our results demonstrate that all three solutions offer efficient resource allocation in terms of PDR and SLA satisfaction. Specifically, the ARIMA-UCB-MAB solution outperforms the other two.
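To illustrate the core idea behind the UCB-MAB formulation, the sketch below simulates UCB1 arm selection where each arm stands for one candidate resource allocation and the reward is a Bernoulli packet-delivery outcome. This is a minimal illustration only: the per-arm PDR values, the reward model, and the exploration constant are hypothetical assumptions, not the paper's actual LoRaWAN slicing setup.

```python
import math
import random

def ucb_select(counts, values, t, c=2.0):
    """UCB1 rule: pick the arm maximizing its mean reward plus an
    exploration bonus that shrinks as the arm is sampled more often."""
    for a, n in enumerate(counts):
        if n == 0:  # play every arm once before comparing bounds
            return a
    return max(range(len(counts)),
               key=lambda a: values[a] + math.sqrt(c * math.log(t) / counts[a]))

def run_ucb(pdr_of_arm, horizon, seed=0):
    """Toy simulation: each arm is a (hypothetical) slice allocation whose
    reward is a Bernoulli packet-delivery outcome with success probability
    pdr_of_arm[a]; UCB1 should concentrate play on the highest-PDR arm."""
    rng = random.Random(seed)
    n_arms = len(pdr_of_arm)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for t in range(1, horizon + 1):
        a = ucb_select(counts, values, t)
        reward = 1.0 if rng.random() < pdr_of_arm[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean
    return counts, values

# Assumed per-allocation PDRs; arm 2 is clearly best.
counts, values = run_ucb([0.3, 0.5, 0.9], horizon=2000)
best = max(range(len(counts)), key=lambda a: counts[a])
```

Q-UCB-MAB would replace the incremental-mean update with a Q-learning update, and ARIMA-UCB-MAB would substitute an ARIMA forecast for the empirical mean inside the confidence bound; both keep the same select-observe-update loop shown here.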
