In the context of urban informatization, meeting the stringent requirements of emergency communication presents a significant challenge for Urban Emergency Communication Networks (UECNs). Mobile ad hoc networks deployed in these environments often suffer node degradation and link disruptions caused by the complex urban landscape, leading to frequent communication failures. This paper introduces a novel resilient routing strategy, termed Deep Reinforcement Learning-based Resilient Routing (DRLRR). The proposed strategy first uses node and link state information to accurately characterize dynamic changes in network topology. The routing decision-making process is then formalized as a Markov decision process, with multiple performance metrics integrated into a reward function tailored to the specific demands of urban emergency communications. By leveraging deep reinforcement learning, DRLRR adapts to the complexities of the urban environment, enabling intelligent, optimal route selection under network topology fluctuations and ensuring uninterrupted data transmission during emergencies. Comparative simulations in NS-3 (Network Simulator 3) show that DRLRR significantly outperforms three other routing protocols, achieving notable improvements in packet delivery rate, average end-to-end delay, and throughput, thereby fulfilling the requirements for reliable and consistent communication in urban emergency scenarios.
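The abstract does not give the exact reward formulation, so the following is only a minimal illustrative sketch of how packet delivery, end-to-end delay, and throughput might be folded into the single scalar reward of such a Markov decision process; all function names, weights, and reference values here are hypothetical assumptions, not taken from the paper.

```python
def routing_reward(delivered: bool,
                   delay_s: float,
                   throughput_bps: float,
                   w_delivery: float = 1.0,
                   w_delay: float = 0.5,
                   w_throughput: float = 0.5,
                   delay_ref_s: float = 0.1,
                   throughput_ref_bps: float = 1.0e6) -> float:
    """Hypothetical multi-metric reward for a DRL routing agent.

    Rewards successful delivery and high throughput, penalizes latency.
    Metrics are normalized by illustrative reference values so the
    weighted terms are comparable in scale.
    """
    r = w_delivery * (1.0 if delivered else -1.0)        # delivery outcome
    r -= w_delay * (delay_s / delay_ref_s)               # latency penalty
    r += w_throughput * (throughput_bps / throughput_ref_bps)  # goodput bonus
    return r


# Example: a delivered packet with 50 ms delay at 2 Mbps yields a
# positive reward; a dropped packet yields a negative one.
print(routing_reward(True, 0.05, 2.0e6))   # 1.75
print(routing_reward(False, 0.3, 0.0))     # -2.5
```

In an actual DRL routing agent, a scalar reward of this shape would be emitted at each forwarding decision and used to train the policy network; the true weighting in DRLRR would be described in the full paper.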