Abstract

Urban society relies heavily on critical infrastructure (CI) such as power and water systems. Society's prosperity and national security depend on the ability to understand, measure, and analyse the vulnerabilities and interdependencies of this system of infrastructures; only then can emergency responders (ER) react quickly and effectively to any major disruption the system might face. In this paper, we propose a model for training a reinforcement learning (RL) agent to optimise resource usage following an infrastructure disruption. The novelty of our approach lies in using dynamic programming techniques to build an agent that learns from experience, where the experience is generated by a simulator. The agent's goal is to maximise a single output, in our case the number of discharged patients (DP) from hospitals or on-site emergency units. We show that exposing such an intelligent agent to a long sequence of simulated disaster scenarios captures enough experience for the agent to make informed decisions.
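The abstract does not give implementation details, but as a rough illustration of the described approach, the sketch below trains a tabular Q-learning agent (a temporal-difference relative of the dynamic-programming methods mentioned) on simulated experience, with the per-step reward being the number of discharged patients. The ToyDisasterSim environment, its two actions, and all hyperparameters are invented for illustration and are not the authors' model or simulator.

    import random
    from collections import defaultdict

    # Hypothetical toy stand-in for the paper's simulator: the state is the
    # number of patients still waiting, actions allocate a treatment resource,
    # and the reward is the number of patients discharged that step.
    class ToyDisasterSim:
        def __init__(self, patients=20):
            self.initial_patients = patients

        def reset(self):
            self.waiting = self.initial_patients
            return self.waiting

        def step(self, action):
            # Action 0: route to hospital; action 1: treat at an on-site unit.
            discharged = min(self.waiting, 3 if action == 0 else 2)
            # On-site units occasionally clear extra patients (illustrative noise).
            if action == 1 and self.waiting > 0 and random.random() < 0.5:
                discharged = min(self.waiting, discharged + 2)
            self.waiting -= discharged
            done = self.waiting == 0
            return self.waiting, discharged, done

    def train(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
        sim = ToyDisasterSim()
        q = defaultdict(lambda: [0.0, 0.0])  # Q-values for both actions per state
        for _ in range(episodes):
            state, done = sim.reset(), False
            while not done:
                # Epsilon-greedy exploration over simulator-generated experience.
                if random.random() < epsilon:
                    action = random.randrange(2)
                else:
                    action = max((0, 1), key=lambda a: q[state][a])
                next_state, reward, done = sim.step(action)
                # One-step temporal-difference backup: the dynamic-programming core.
                target = reward + (0.0 if done else gamma * max(q[next_state]))
                q[state][action] += alpha * (target - q[state][action])
                state = next_state
        return q

    if __name__ == "__main__":
        q = train()
        # Print the greedy policy learned for each visited state.
        print({s: max((0, 1), key=lambda a: q[s][a]) for s in sorted(q)})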
