Abstract

In recent years, the development of Internet of Things technology has driven growing demand for low-power wireless communication, giving rise to LoRa technology. A LoRa network mainly consists of terminal nodes, gateways, and LoRa network servers. Because LoRa networks often deploy large numbers of terminal nodes for environmental sensing, the limited resources of LoRa technology, the explosive growth in the number of nodes, and the ever-changing complex environment pose unprecedented challenges to LoRa network performance. Although some research has addressed these challenges through channel allocation, the impact of complex and changing environmental factors on the LoRa network has yet to be considered. Reasonable channel allocation should be tailored to the situation, adapting to different environments and network distributions through continuous adaptive learning to obtain the corresponding allocation strategy. In addition, most current research focuses only on the channel adjustment of an individual LoRa node and does not consider the indirect impact of that node's allocation on the entire network. The Inner External DQN spreading factor (SF) allocation method (IEDQN) proposed in this paper improves the packet reception rate of the whole system by using reinforcement learning to adaptively learn the environment, and it accounts for the impact of each node's parameter configuration on the entire network through nested reinforcement learning, further optimizing whole-network performance. Finally, this paper evaluates the performance of IEDQN through simulation. The experimental results show that IEDQN effectively optimizes network performance.
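To make the idea of nested reinforcement learning for SF allocation more concrete, the following is a minimal sketch, not the paper's implementation. It uses a tabular, bandit-style Q-learning update in place of a DQN, a crude collision-based model as a stand-in for a LoRa simulator, and an outer evaluation that rewards every node with the network-wide packet reception rate; all node counts, parameters, and the reward model are illustrative assumptions.

```python
import random
from collections import defaultdict

# Sketch only: nested reinforcement learning for spreading-factor (SF)
# allocation in a toy LoRa network. Tabular Q-values stand in for a DQN,
# and packet_reception_rate() stands in for a real LoRa simulator.

SFS = [7, 8, 9, 10, 11, 12]   # candidate LoRa spreading factors
NUM_NODES = 20                # assumed number of terminal nodes
EPISODES = 200
ALPHA, EPSILON = 0.1, 0.1     # learning rate, exploration rate

def packet_reception_rate(allocation):
    """Crude stand-in for a LoRa simulator: nodes sharing the same SF
    contend with each other, so spreading nodes across SFs raises the
    network-wide packet reception rate."""
    rate = 0.0
    for sf in allocation:
        same_sf = sum(1 for other in allocation if other == sf)
        rate += 1.0 / same_sf          # more contention -> lower success
    return rate / len(allocation)

# Inner agent: per-node value estimates for each candidate SF.
q_table = defaultdict(lambda: [0.0] * len(SFS))

def choose_sf(node):
    if random.random() < EPSILON:                          # explore
        return random.randrange(len(SFS))
    values = q_table[node]
    return max(range(len(SFS)), key=lambda a: values[a])   # exploit

for episode in range(EPISODES):
    # Inner step: each node picks an SF from its own value estimates.
    actions = [choose_sf(node) for node in range(NUM_NODES)]
    allocation = [SFS[a] for a in actions]

    # Outer evaluation: reward every node with the network-wide packet
    # reception rate, so each node's update reflects its indirect effect
    # on the whole network rather than only its own link.
    reward = packet_reception_rate(allocation)
    for node, action in enumerate(actions):
        old = q_table[node][action]
        q_table[node][action] = old + ALPHA * (reward - old)

best = [SFS[max(range(len(SFS)), key=lambda a: q_table[n][a])]
        for n in range(NUM_NODES)]
print("learned SF allocation:", best)
print("resulting packet reception rate: %.3f" % packet_reception_rate(best))
```

Running the sketch shows the intended behavior of the nested scheme: because the reward is computed over the whole network, the learned allocation spreads nodes across spreading factors and raises the aggregate packet reception rate; the paper's method replaces the table with inner and external DQNs and the toy model with a proper LoRa simulation.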
