This article explores the application of artificial intelligence to the challenge of precisely planning LoRa networks. It examines machine learning algorithms in conjunction with empirically collected data to develop an effective propagation model for LoRaWAN. We propose decoupling feature extraction from regression analysis, which eases training data requirements. In our comparative analysis, decision-tree-based gradient boosting achieved the lowest root-mean-squared error, 5.53 dBm. Another advantage of this model is its interpretability, which we exploit to qualitatively examine the governing propagation mechanisms. This approach provides a unique opportunity to practically understand the dependence of signal strength on the underlying variables. The analysis revealed a 1.5 dBm sensitivity improvement as the LoRa spreading factor changed from 7 to 12. The impact of clutter was revealed to be highly non-linear: attenuation grew sharply as clutter increased up to a saturation point, beyond which additional clutter had little effect. The outcome of this work is more accurate estimation and a better understanding of LoRa propagation, mitigating the challenges associated with large-scale, dense LoRaWAN deployments and enabling improved link budget analysis, interference management, quality of service, scalability, and energy efficiency in Internet of Things networks.
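To make the modeling approach concrete, the following is a minimal sketch of how a decision-tree-based gradient-boosting regressor might be trained to predict received signal strength and then inspected for interpretability. It is not the paper's pipeline: the feature set (distance, spreading factor, clutter index), the synthetic data generator standing in for the empirical measurements, and all hyperparameters are illustrative assumptions.

```python
# Sketch: gradient-boosting regression of RSSI from link features,
# with feature importances as a simple interpretability probe.
# Data and features are synthetic/assumed, not the paper's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical link features: Tx-Rx distance (m), LoRa spreading
# factor (7-12), and a normalized clutter index along the path.
distance = rng.uniform(50, 5000, n)
sf = rng.integers(7, 13, n)
clutter = rng.uniform(0.0, 1.0, n)

# Synthetic RSSI (dBm): log-distance path loss, a clutter penalty
# that saturates (mimicking the reported non-linear clutter effect),
# a small per-SF term, and measurement noise.
rssi = (14 - 30
        - 30 * np.log10(distance)
        - 15 * np.minimum(clutter, 0.6)   # effect flattens past ~0.6
        + 0.3 * (sf - 7)
        + rng.normal(0, 3, n))

X = np.column_stack([distance, sf, clutter])
X_tr, X_te, y_tr, y_te = train_test_split(X, rssi, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                  random_state=0)
model.fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE: {rmse:.2f} dBm")
for name, imp in zip(["distance", "sf", "clutter"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Because the trees partition each feature independently, the learned response can be probed per variable (e.g., via feature importances or partial-dependence plots), which is the kind of qualitative inspection of propagation mechanisms the abstract refers to.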