Abstract

In opportunistic networks, the path connecting two nodes is not continuous at any time instant. In such an environment, routing is an extremely taxing task owing to the ever-changing nature of the network and random connections between nodes. Routing in such networks is done by a store-carry-forward mechanism, in which local information is used to make opportunistic routing decisions. In this study, the authors present a novel dynamic and intelligent self-learning routing protocol that improves on the history-based routing protocol for opportunistic networks (HiBOp). The proposed method presents a novel solution for estimating the average latency between any two nodes, which is used along with reinforcement learning to dynamically learn the nodes' interactions. Simulation results on a real mobility trace (INFOCOM 2006) show that latency-aware reinforced routing for opportunistic networks applied to HiBOp outperforms the original HiBOp protocol by 14.4% in terms of delivery probability, 15% in terms of average latency and 34.7% in terms of overhead ratio.
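The combination of per-pair latency estimation and reinforcement learning described above can be illustrated with a minimal sketch. The class name, the learning rate `alpha`, and the update rule below are illustrative assumptions, not details taken from the paper; the sketch uses a temporal-difference-style running average, one common way to learn such estimates from observed deliveries.

```python
class LatencyEstimator:
    """Hypothetical sketch: each node keeps a running estimate of the
    average delivery latency to every other node, refined with a
    reinforcement-learning-style update on each observed delivery."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha    # learning rate for the update (assumed value)
        self.latency = {}     # (src, dst) -> estimated average latency

    def update(self, src, dst, observed_latency):
        # Temporal-difference style update: move the current estimate a
        # fraction alpha toward the newly observed delivery latency.
        old = self.latency.get((src, dst), observed_latency)
        self.latency[(src, dst)] = old + self.alpha * (observed_latency - old)

    def prefer(self, dst, candidates):
        # Under a store-carry-forward scheme, hand the bundle to the
        # candidate relay with the lowest estimated latency to dst.
        return min(candidates,
                   key=lambda c: self.latency.get((c, dst), float("inf")))
```

For example, after observing two deliveries from `A` to `B`, the estimate for that pair sits between the two observations, and `prefer` picks the relay whose learned latency to the destination is smallest.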
