Abstract

This paper determines optimal strategies for transmitting messages in a mobile ad-hoc network (MANET) operating in a communications-limited, lossy environment. When an agent generates or receives a message, it must decide to which neighbors, and how many times, to forward that message. The opposing goals are to (1) propagate all messages throughout the MANET quickly and (2) minimize the total number of messages sent. We compare two optimized decision strategies for the agents: a reinforcement learning (RL) method and a game theory (GT) method. In the RL framework, each node in the MANET acts as a reinforcement learning agent that must learn when to send messages and to whom. In the GT framework, we create a game tree whose nodes encode message knowledge and connectivity information and whose decision branches represent sending messages to neighbors. We solve the game using a Monte Carlo Tree Search (MCTS) variant to determine the probability that a message is sent to a neighbor. Performance is assessed in terms of the total number of messages sent and the time required for a given percentage of messages to reach a given percentage of nodes. Experiments with MANETs of varying size and connectivity are conducted, and the performance and training speed of the RL and GT methods are compared. The decision strategies are domain agnostic and may be applied to ground, air, surface, sub-surface, or satellite networks.
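To make the per-node decision concrete, the sketch below shows a minimal tabular Q-learning forwarding agent for a single MANET node. The state encoding (message identifier plus a tuple of currently reachable neighbors), the reward shape, the hold action, and all hyperparameters are assumptions made for illustration only; they are not the paper's actual formulation.

```python
# Illustrative sketch only: one node deciding whether to forward a message
# and to which neighbor, using tabular Q-learning. All details are assumed.
import random
from collections import defaultdict

class ForwardingAgent:
    def __init__(self, n_neighbors, alpha=0.1, gamma=0.9, epsilon=0.2):
        # Actions: forward to neighbor 0..n_neighbors-1, or hold (last index).
        self.n_actions = n_neighbors + 1
        self.q = defaultdict(lambda: [0.0] * self.n_actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy choice of which neighbor to send to, or hold."""
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return max(range(self.n_actions), key=values.__getitem__)

    def update(self, state, action, reward, next_state):
        """One-step Q-learning update."""
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Hypothetical usage: state = (message_id, tuple marking which neighbors are in range).
agent = ForwardingAgent(n_neighbors=3)
state = ("msg7", (1, 0, 1))            # neighbors 0 and 2 currently reachable
action = agent.choose(state)
# An assumed reward trades off propagation speed against transmission cost,
# e.g. positive if the recipient lacked the message, minus a per-send penalty.
agent.update(state, action, reward=0.9, next_state=("msg7", (1, 1, 1)))
```

A reward structured this way captures the two opposing objectives from the abstract: rewarding deliveries to nodes that do not yet hold the message encourages fast propagation, while the per-transmission penalty discourages redundant sends.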
