Abstract

With the widespread application of the Internet of Things (IoT), its energy consumption is becoming a major concern. Federated Learning (FL) allows multiple low-power devices to collaboratively learn a shared model, which reduces the energy consumption of IoT systems. However, FL involves highly varied data structures and requires large amounts of communication, so communication efficiency greatly affects system performance. In this paper, we optimize communication efficiency at the device access layer. Considering the instability of IoT device network connections, we propose a Deep Reinforcement Learning (DRL) based Efficient Access Scheduling Algorithm (DRL-EASA). It adapts to changes in the number and density of IoT devices and is thus applicable to dynamic FL-based IoT systems. In scale-varying scenarios, DRL-EASA constructs geospatially oriented state spaces and uses a learning algorithm to train access scheduling strategies, effectively decoupling the number of devices from the algorithm's network structure. First, the geographic region is divided into grids, and the User State Information (USI) is mapped into fixed-dimensional Geographic State Information Vectors (GSIV). Second, a Convolutional Neural Network (CNN) is used as an agent to extract interference features from the GSIV, and the Proximal Policy Optimization (PPO) algorithm is used to train the agent. Finally, a random algorithm assists the DRL in generating scheduling decisions, enhancing the algorithm's generalization in high device density scenarios. Numerical results show that our approach optimizes access scheduling strategies for both uplink and downlink, adapts effectively to dynamically changing device numbers, and generalizes well across a range of device densities.
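The key idea of decoupling the device count from the network input is that any number of devices can be aggregated into a fixed-size grid tensor, which a CNN can then consume. A minimal sketch of such a USI-to-GSIV mapping is shown below; the per-device fields (position, channel gain, queue length), region size, and grid resolution are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def usi_to_gsiv(devices, region=100.0, grid=10):
    """Map variable-length User State Information (USI) to a
    fixed-dimensional Geographic State Information Vector (GSIV),
    here realized as a (channels, grid, grid) tensor.

    devices: iterable of (x, y, channel_gain, queue_len) tuples
             (hypothetical fields for illustration).
    """
    gsiv = np.zeros((2, grid, grid), dtype=np.float32)
    cell = region / grid  # side length of one grid cell
    for x, y, gain, queue in devices:
        # Clamp to the last cell so boundary devices stay in range.
        i = min(int(y // cell), grid - 1)
        j = min(int(x // cell), grid - 1)
        gsiv[0, i, j] += gain   # aggregate channel gain per cell
        gsiv[1, i, j] += queue  # aggregate traffic backlog per cell
    return gsiv
```

Whether the input holds 10 devices or 10,000, the output shape stays `(2, grid, grid)`, so the CNN agent's architecture never has to change with the device count.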
