Abstract

The Optimized Link State Routing (OLSR) protocol is a popular proactive routing protocol for wireless mesh networks. Like many routing protocols, however, OLSR can suffer from inefficiencies and suboptimal performance under certain network conditions. To address these issues, researchers have proposed using reinforcement learning (RL) algorithms to improve the routing decisions made by OLSR. This paper explores the use of three RL algorithms, namely Q-Learning, SARSA, and Deep Q-Networks (DQN), to improve the performance of OLSR. Each algorithm is described in detail, and its application to OLSR is explained. In particular, the network is modeled as a Markov decision process (MDP) in which each node is a state and each link between nodes is an action. The reward for taking an action is determined by the quality of the corresponding link, and the goal is to maximize the cumulative reward over a sequence of actions. Q-Learning is a simple and effective off-policy algorithm that estimates the value of each possible action in a given state. SARSA is a closely related on-policy algorithm that takes the current policy into account when estimating the value of each action. DQN uses a neural network to approximate the Q-values of the actions in a given state, which yields more accurate estimates in complex network environments. All three RL algorithms can thus be used to improve the routing decisions made by OLSR. This paper provides a comprehensive overview of the application of RL algorithms to OLSR and highlights the potential benefits of using these algorithms to improve the performance of wireless networks.
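To make the formulation concrete, the following is a minimal sketch of the tabular Q-Learning and SARSA updates on a toy mesh modeled as the MDP the abstract describes: nodes are states, next-hop links are actions, and the reward reflects link quality. The topology, link-quality values, reward shaping, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import random
from collections import defaultdict

# Toy mesh: link_quality maps (node, neighbor) -> quality in [0, 1]
# (e.g., derived from ETX or delivery ratio; the values here are made up).
link_quality = {
    ("A", "B"): 0.9, ("A", "C"): 0.4,
    ("B", "C"): 0.8, ("B", "D"): 0.7,
    ("C", "D"): 0.9,
}
link_quality.update({(v, u): q for (u, v), q in list(link_quality.items())})

neighbors = defaultdict(list)
for (u, v) in link_quality:
    neighbors[u].append(v)

DEST = "D"                      # routing destination for this episode
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)          # Q[(state, action)] -> estimated value

def reward(u, v):
    # Shaped as (quality - 1): a small per-hop cost that shrinks on good
    # links, plus a bonus for delivery. The shaping is our assumption; with
    # purely positive per-hop rewards a learner could profit from looping.
    return (link_quality[(u, v)] - 1.0) + (1.0 if v == DEST else 0.0)

def epsilon_greedy(state):
    if random.random() < EPS:
        return random.choice(neighbors[state])
    return max(neighbors[state], key=lambda a: Q[(state, a)])

def q_learning_episode(start="A"):
    # Off-policy: the target bootstraps from the best next action.
    s = start
    while s != DEST:
        a = epsilon_greedy(s)
        s_next, r = a, reward(s, a)
        best_next = 0.0 if s_next == DEST else max(
            Q[(s_next, a2)] for a2 in neighbors[s_next])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

def sarsa_episode(start="A"):
    # On-policy: the target uses the action the policy actually takes next.
    # (Reset Q before switching algorithms; both update the same table here.)
    s, a = start, epsilon_greedy(start)
    while True:
        s_next, r = a, reward(s, a)
        if s_next == DEST:
            Q[(s, a)] += ALPHA * (r - Q[(s, a)])
            break
        a_next = epsilon_greedy(s_next)
        Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
        s, a = s_next, a_next

# Train, then read off the greedy route from A to D.
for _ in range(2000):
    q_learning_episode()
s, route = "A", ["A"]
while s != DEST and len(route) < 10:
    s = max(neighbors[s], key=lambda a: Q[(s, a)])
    route.append(s)
print(" -> ".join(route))       # expected here: A -> B -> D
```

After training, the greedy policy induces a next hop per node, which is one plausible point of integration with OLSR's forwarding decisions; the paper's exact integration mechanism may differ.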
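For larger meshes, where a table over all (node, link) pairs becomes unwieldy, DQN replaces the table with a function approximator. Below is a minimal PyTorch-style sketch under the assumption that the state is a one-hot encoding of the current node and the network emits one Q-value per candidate next hop; the architecture, sizes, and hyperparameters are illustrative, not the paper's.

```python
import random
from collections import deque

import torch
import torch.nn as nn

NUM_NODES = 4               # illustrative mesh size
GAMMA = 0.9

class QNet(nn.Module):
    """Maps a one-hot node encoding to one Q-value per candidate next hop."""
    def __init__(self, num_nodes):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(num_nodes, 32), nn.ReLU(),
            nn.Linear(32, num_nodes),  # a real version would mask non-neighbors
        )
    def forward(self, x):
        return self.layers(x)

policy_net = QNet(NUM_NODES)
target_net = QNet(NUM_NODES)           # periodically re-synced for stability
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)          # (state, action, reward, next_state, done)

def one_hot(node):
    x = torch.zeros(NUM_NODES)
    x[node] = 1.0
    return x

def dqn_update(batch_size=32):
    """One gradient step on a random minibatch from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(list(replay), batch_size)
    states = torch.stack([one_hot(s) for s, _, _, _, _ in batch])
    actions = torch.tensor([a for _, a, _, _, _ in batch])
    rewards = torch.tensor([r for _, _, r, _, _ in batch], dtype=torch.float32)
    next_states = torch.stack([one_hot(s2) for _, _, _, s2, _ in batch])
    done = torch.tensor([d for _, _, _, _, d in batch], dtype=torch.float32)

    # Current estimates for the actions that were actually taken.
    q = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped targets from the frozen target network.
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
    target = rewards + GAMMA * (1.0 - done) * q_next

    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In use, experience tuples would be appended to replay as packets are forwarded, dqn_update would be called every few steps, and target_net would be re-synced with policy_net periodically to stabilize the bootstrapped targets.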
