Abstract

As one of the key technologies of 5G, massive IoT environments provide ubiquitous IoT services. Compared with 4G, their structure is more complex and the number of deployed nodes is far larger. If a failure occurs and its effects are not mitigated in time, network performance degrades significantly. The cell outage compensation (COC) problem in massive IoT environments is therefore important. Although deep reinforcement learning (DRL) has been applied to many scenarios related to self-organizing networks (SONs), it has seen few applications in cell outage compensation. In this paper, targeting the cell outage scenario in massive IoT environments, we present a DRL-based framework whose goal is to maximize the connectivity of base stations while meeting the quality-of-service demands of each compensation user. Specifically, we first allocate compensation users to adjacent base stations (BSs) using the K-means clustering algorithm, and then use a deep Q-network (DQN) to determine the antenna downtilt and the power allocated to each compensation user. Simulation results show that the algorithm converges quickly, stabilizes, and reaches 95% of the maximum target value, verifying the efficiency of the DRL-based framework and its effectiveness in meeting user requirements and handling cell outage compensation.
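To make the first step of the pipeline concrete, the sketch below illustrates one plausible reading of the K-means allocation stage: users stranded in the outage zone are clustered, and each cluster is matched one-to-one to a neighboring compensating BS. This is a minimal illustration, not the authors' code; all names (`user_positions`, `neighbor_bs_positions`) and the one-to-one matching via the Hungarian algorithm are assumptions introduced here.

```python
# Illustrative sketch (not the paper's implementation) of allocating
# compensation users to adjacent BSs with K-means clustering.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment


def assign_compensation_users(user_positions, neighbor_bs_positions):
    """Cluster outage-zone users into one group per compensating BS,
    then match clusters to BSs one-to-one by centroid distance."""
    k = len(neighbor_bs_positions)
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = kmeans.fit_predict(user_positions)

    # Cost matrix: distance from each cluster centroid to each BS.
    cost = np.linalg.norm(
        kmeans.cluster_centers_[:, None, :] - neighbor_bs_positions[None, :, :],
        axis=2,
    )
    # Hungarian matching ensures each BS compensates exactly one cluster.
    cluster_ids, bs_ids = linear_sum_assignment(cost)

    return {
        int(bs): np.where(labels == cluster)[0]
        for cluster, bs in zip(cluster_ids, bs_ids)
    }


# Example: 50 users in the outage zone, 3 neighboring compensating BSs.
rng = np.random.default_rng(0)
users = rng.uniform(0.0, 1000.0, size=(50, 2))   # user (x, y) positions in meters
bs = np.array([[0.0, 0.0], [1000.0, 0.0], [500.0, 1000.0]])
print(assign_compensation_users(users, bs))
```

The DQN stage would then operate per compensating BS, with the assigned user set shaping its state and the antenna downtilt and per-user power forming its action space; the exact state, action, and reward design follows the paper's formulation.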
