Abstract

This work proposes a reinforcement learning-based dynamic routing protocol well suited to disaster-prone areas. Effective disaster management is essential to save the lives of those caught in crisis situations; however, when a disaster strikes, the rescue teams' infrastructural support is often no longer available. In such a setting, ad hoc networks can be readily deployed. A disaster area mobility model is used to enable communication between citizens and rescue teams. A dynamic routing scheme is also required to cope with high node mobility and frequent link failures in the network. The quality of communication among the parties involved in preserving people's lives is assessed using performance parameters such as latency and energy. In this research, three distinct reinforcement learning models are used to evaluate routing protocols.

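To make the idea of reinforcement learning-driven route selection concrete, the following is a minimal sketch of a Q-learning next-hop policy that trades off latency and energy, in the spirit of the approach described above. The topology, reward weights, and hyperparameters are illustrative assumptions only; they are not the models or parameters used in the paper.

```python
import random

# Hypothetical topology: adjacency list with per-link (latency_ms, energy_cost) values.
# These numbers are illustrative assumptions, not data from the paper.
TOPOLOGY = {
    "A": {"B": (10, 0.5), "C": (25, 0.3)},
    "B": {"A": (10, 0.5), "D": (15, 0.4)},
    "C": {"A": (25, 0.3), "D": (30, 0.2)},
    "D": {"B": (15, 0.4), "C": (30, 0.2)},
}
DESTINATION = "D"

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)
EPSILON = 0.2  # exploration probability (assumed)

# Q[(node, next_hop)] estimates the value of forwarding a packet from `node`
# to `next_hop` on its way toward DESTINATION.
Q = {(n, nh): 0.0 for n, nbrs in TOPOLOGY.items() for nh in nbrs}


def reward(node, next_hop):
    """Reward favours low latency and low energy; reaching the destination pays a bonus."""
    latency, energy = TOPOLOGY[node][next_hop]
    bonus = 100.0 if next_hop == DESTINATION else 0.0
    return bonus - latency - 10.0 * energy


def choose_next_hop(node, explore=True):
    """Epsilon-greedy selection over the node's current neighbours."""
    neighbours = list(TOPOLOGY[node])
    if explore and random.random() < EPSILON:
        return random.choice(neighbours)
    return max(neighbours, key=lambda nh: Q[(node, nh)])


def train(episodes=500):
    """Standard tabular Q-learning update applied to simulated packet forwarding."""
    for _ in range(episodes):
        node = random.choice([n for n in TOPOLOGY if n != DESTINATION])
        for _ in range(10):  # cap hops per episode
            nh = choose_next_hop(node)
            r = reward(node, nh)
            future = 0.0 if nh == DESTINATION else max(Q[(nh, n2)] for n2 in TOPOLOGY[nh])
            Q[(node, nh)] += ALPHA * (r + GAMMA * future - Q[(node, nh)])
            if nh == DESTINATION:
                break
            node = nh


if __name__ == "__main__":
    train()
    print("Preferred next hop from A:", choose_next_hop("A", explore=False))
```

In a disaster-area deployment, the reward signal would be driven by measured link latency and residual node energy rather than the fixed values assumed here, so that routes adapt as links fail and nodes move.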