Abstract

Routing is a key challenge in networks, whether wired or wireless. It is particularly difficult in mobile ad hoc networks (MANETs), which are flexible, decentralized wireless networks. Furthermore, malicious nodes in a MANET can degrade the routing performance of the network. Recently, reinforcement learning has been proposed to address these problems. As a reinforcement learning algorithm, Q-learning is well suited to opportunistic routing because it not only adapts to changing networks but also mitigates the effect of malicious nodes on packet transmission. In this study, we propose a new reinforcement learning routing protocol for MANETs called reputation opportunistic routing based on Q-learning (RORQ). The protocol employs a game-theoretic reputation system that detects and excludes malicious nodes to enable efficient routing. Thus, our method finds routing paths more effectively in environments under attack by malicious nodes. Simulation results show that the proposed method achieves superior routing performance compared with other state-of-the-art routing protocols. Specifically, compared with other algorithms, the proposed method demonstrated gains of up to 55% in packet loss, up to 82% in average end-to-end delay, and up to 28% in energy efficiency under black hole attacks, and up to 73% in packet loss, up to 35% in average end-to-end delay, and up to 12% in energy efficiency under gray hole attacks.
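To illustrate the general idea of combining Q-learning next-hop selection with a reputation filter, the following minimal Python sketch shows one plausible formulation; it is not the authors' RORQ protocol, and all names and parameter values (ALPHA, GAMMA, REP_THRESHOLD, the reputation update rule) are assumptions made for illustration only.

```python
# Illustrative sketch: Q-learning next-hop selection with a reputation filter.
# NOT the authors' RORQ protocol; parameters and update rules are assumed.
import random
from collections import defaultdict

ALPHA = 0.5          # learning rate (assumed value)
GAMMA = 0.8          # discount factor (assumed value)
REP_THRESHOLD = 0.3  # neighbors below this reputation are excluded (assumed)

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        # Q[destination][neighbor] -> estimated routing quality
        self.q = defaultdict(lambda: defaultdict(float))
        # reputation[neighbor] -> value in [0, 1], updated from observed forwarding
        self.reputation = defaultdict(lambda: 1.0)

    def update_reputation(self, neighbor, forwarded):
        """Raise reputation on observed forwarding, cut it on suspected dropping."""
        if forwarded:
            self.reputation[neighbor] = min(1.0, self.reputation[neighbor] + 0.05)
        else:
            self.reputation[neighbor] *= 0.5

    def select_next_hop(self, destination, neighbors, epsilon=0.1):
        """Epsilon-greedy choice among neighbors whose reputation is acceptable."""
        trusted = [n for n in neighbors if self.reputation[n] >= REP_THRESHOLD]
        if not trusted:
            return None  # no trustworthy neighbor; buffer or drop the packet
        if random.random() < epsilon:
            return random.choice(trusted)
        return max(trusted, key=lambda n: self.q[destination][n])

    def update_q(self, destination, neighbor, reward, neighbor_best_q):
        """Standard Q-learning update, with the reward scaled by reputation."""
        weighted_reward = reward * self.reputation[neighbor]
        old = self.q[destination][neighbor]
        self.q[destination][neighbor] = old + ALPHA * (
            weighted_reward + GAMMA * neighbor_best_q - old
        )

# Example: node A observes that neighbor B reliably forwards toward destination D.
a = Node("A")
a.update_reputation("B", forwarded=True)
a.update_q("D", "B", reward=1.0, neighbor_best_q=0.0)
print(a.select_next_hop("D", ["B", "C"]))
```

In this sketch, misbehaving neighbors see their reputation decay below the threshold and are removed from the candidate set, so the Q-learning updates concentrate on cooperative forwarders; the actual reputation and reward design in RORQ may differ.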
