Abstract

In an unmanned aerial vehicle ad-hoc network (UANET), the node speed of unmanned aerial vehicles (UAVs) may reach up to 400 km/h. Because UAV nodes move at widely varying speeds, the network topology changes at correspondingly different rates. The traditional optimized link state routing (OLSR) protocol cannot adaptively adjust its routing update period when the network topology changes, which may cause nodes to compute incorrect routing tables and thus increases the average end-to-end delay and packet loss rate of packet transmission. To enhance the adaptability of the OLSR routing protocol to network topology changes, this paper proposes a multi-agent independent deep deterministic policy gradient-OLSR (MA-IDDPG-OLSR) routing protocol based on distributed multi-agent reinforcement learning. The protocol deploys the DDPG algorithm on each UAV node, and each node adaptively adjusts its Hello and TC message sending intervals according to the state of its one-hop neighbouring nodes as well as its own state. Simulation results show that the proposed protocol improves throughput and reduces the packet loss rate compared with the traditional AODV, GRP, and OLSR protocols, as well as the distributed multi-agent independent proximal policy optimization-OLSR (MA-IPPO-OLSR) and distributed multi-agent independent twin delayed deep deterministic policy gradient-OLSR (MA-ITD3-OLSR) routing protocols. Since MA-IDDPG-OLSR relies only on local information, it shows a minor performance degradation compared with the centralized single-agent DQN-OLSR routing protocol, but it is better suited to a completely distributed UAV network without a centralized node.
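To illustrate the idea of a per-node deterministic policy that maps local observations to continuous message intervals, the sketch below shows a minimal DDPG-style actor in PyTorch. This is not the authors' implementation: the state features (own speed, one-hop neighbour count, neighbour-change rate), the Hello/TC interval bounds, and the network sizes are all illustrative assumptions, and the critic, replay buffer, and training loop of DDPG are omitted.

```python
# Minimal sketch (not the paper's implementation) of a per-UAV DDPG actor that
# maps a node's local state to continuous Hello/TC message sending intervals.
# Feature choices, interval bounds, and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

HELLO_RANGE = (0.5, 2.0)   # assumed Hello interval bounds in seconds
TC_RANGE = (2.0, 8.0)      # assumed TC interval bounds in seconds


class Actor(nn.Module):
    """Deterministic policy: local state -> (hello_interval, tc_interval)."""

    def __init__(self, state_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Tanh(),   # outputs in (-1, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        raw = self.net(state)
        # Rescale tanh outputs to the assumed interval ranges.
        hello = HELLO_RANGE[0] + (raw[..., 0] + 1) / 2 * (HELLO_RANGE[1] - HELLO_RANGE[0])
        tc = TC_RANGE[0] + (raw[..., 1] + 1) / 2 * (TC_RANGE[1] - TC_RANGE[0])
        return torch.stack([hello, tc], dim=-1)


# Each UAV would run its own independent agent (no centralized critic or shared state).
actor = Actor()
# Hypothetical local observation: [normalized own speed, normalized one-hop
# neighbour count, recent neighbour-change rate].
state = torch.tensor([0.6, 0.4, 0.2])
hello_interval, tc_interval = actor(state).tolist()
print(f"Hello interval: {hello_interval:.2f}s, TC interval: {tc_interval:.2f}s")
```

In a full DDPG agent, this actor would be trained against a critic from experience tuples gathered as the node observes how its chosen intervals affect routing performance; here only the inference path is sketched.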
