Abstract

Unmanned aerial vehicles (UAVs) are important platforms for efficiently executing search and rescue missions in disaster or air-crash scenarios. In UAV ad hoc networks (UANETs), each node communicates with the others through a routing protocol. However, UAV routing protocols face the challenges of high mobility and limited node energy, which lead to unstable links and a sparse network topology caused by premature node death, severely degrading network performance. To address these problems, we propose DSEGR, a deep-reinforcement-learning-based geographical routing protocol that considers link stability and energy prediction for UANETs. First, we design a link stability evaluation indicator and use the autoregressive integrated moving average (ARIMA) model to predict the residual energy of neighbor nodes. Then, the packet forwarding process is modeled as a Markov Decision Process, and a double deep Q network with prioritized experience replay is used to learn the routing decision process. Meanwhile, a reward function is designed to achieve a faster convergence rate, and the analytic hierarchy process (AHP) is used to determine the weights of the factors considered in the reward function. Finally, to verify the effectiveness of DSEGR, we conduct simulation experiments to analyze network performance. The simulation results demonstrate that the proposed routing protocol remarkably outperforms others in packet delivery ratio and achieves a faster convergence rate.
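
As an illustration of the energy-prediction step, the sketch below fits an ARIMA model to a neighbor node's recent residual-energy readings and forecasts the next value. The ARIMA order (1, 1, 1), the sample readings, and the statsmodels-based implementation are assumptions made for illustration only; the abstract does not specify these details.

```python
# Minimal sketch of neighbor residual-energy prediction with ARIMA.
# Assumptions: ARIMA order (1, 1, 1), a short history of energy readings,
# and the statsmodels library; none of these are specified by the paper.
import warnings

import numpy as np
from statsmodels.tsa.arima.model import ARIMA


def predict_residual_energy(energy_history, order=(1, 1, 1)):
    """Forecast a neighbor node's residual energy one step ahead."""
    series = np.asarray(energy_history, dtype=float)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")        # suppress fitting convergence warnings
        fitted = ARIMA(series, order=order).fit()
    return float(fitted.forecast(steps=1)[0])  # predicted residual energy


if __name__ == "__main__":
    # Hypothetical residual-energy samples (J) collected from periodic hello messages.
    history = [100.0, 96.8, 93.9, 90.7, 87.9, 84.8, 81.9, 79.1]
    print(f"Predicted next residual energy: {predict_residual_energy(history):.2f} J")
```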
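The AHP weighting of the reward-function factors can likewise be sketched as taking the principal eigenvector of a pairwise comparison matrix and checking its consistency ratio. The three factors and the judgment values below are hypothetical examples, not the paper's actual comparison matrix.

```python
# Minimal AHP sketch: derive reward-function factor weights from a pairwise
# comparison matrix via the principal eigenvector, then check consistency.
# The factors and judgment values are hypothetical examples.
import numpy as np

# Hypothetical pairwise comparisons among three reward factors
# (e.g., link stability, residual energy, distance progress).
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the largest eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalize so the weights sum to 1

# Consistency ratio (CR), using random index RI = 0.58 for a 3x3 matrix.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```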
