Abstract

To ensure seamless mobility of users in scenarios with flying base stations (FlyBSs) and static ground base stations (GBSs), an efficient handover mechanism is required. In this paper, we introduce a new framework that simultaneously manages the cell individual offset (CIO) for handovers of both FlyBSs and mobile users. Our objective is to maximize the capacity of the mobile users while also accounting for the cost of handover, which reflects the excessive signaling and energy consumption caused by redundant handovers. This problem is of very high complexity for conventional optimization methods, and an optimal solution would require information that is commonly not available to the mobile network. Hence, we adjust the CIO of the FlyBSs and GBSs via reinforcement learning. First, we adopt Q-learning to solve the problem. Due to the practical limitations imposed by a large Q-table, we also propose Q-learning with an approximated Q-table. Still, for larger networks, even the approximated Q-table can require substantial storage and computation time. Therefore, we also apply actor-critic-based deep reinforcement learning. Simulation results demonstrate that all three proposed algorithms converge promptly and increase the communication capacity by tens of percent, while the handover failure ratio and the handover ping-pong ratio are reduced several-fold compared to the state of the art.
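The abstract does not give the paper's exact state, action, or reward definitions, so the following is only a minimal sketch of the tabular Q-learning baseline it describes: it assumes a discretized state (e.g., quantized signal measurements), a discrete set of candidate CIO values as actions, and a reward trading user capacity against a handover-cost penalty. All names, sizes, and parameters here are illustrative assumptions, not the authors' design.

```python
import numpy as np

# Assumed discretization -- the paper's actual state/action spaces differ.
N_STATES = 100                             # e.g., quantized RSRP difference + mobility context
CIO_LEVELS = np.arange(-6.0, 6.5, 1.0)     # hypothetical candidate CIO values in dB
N_ACTIONS = len(CIO_LEVELS)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount factor, exploration rate

Q = np.zeros((N_STATES, N_ACTIONS))        # the Q-table whose size motivates approximation

def select_action(state, rng):
    """Epsilon-greedy choice of the next CIO setting."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def reward(capacity, n_handovers, ho_cost=1.0):
    """Assumed reward: user capacity minus a penalty for redundant handovers,
    a stand-in for the paper's capacity-vs-handover-cost objective."""
    return capacity - ho_cost * n_handovers

def q_update(state, action, r, next_state):
    """Standard tabular Q-learning update."""
    td_target = r + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Illustrative usage with a simulated transition:
rng = np.random.default_rng(0)
s = 0
a = select_action(s, rng)
# ... apply CIO_LEVELS[a] in the network, observe capacity and handovers ...
q_update(s, a, reward(capacity=10.0, n_handovers=1), next_state=1)
```

The Q-table above grows with the product of the state and action space sizes; the paper's approximated Q-table and actor-critic-based deep reinforcement learning variants address exactly this growth for larger networks.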
