Abstract

Emergency medical service (EMS) systems have two main goals when sending ambulances to patients: responding rapidly and sending the right type of personnel based on each patient's health needs. We address both objectives by formulating and studying a Markov decision process model that determines, in real time, which type of ambulance (server) to send to each patient. The base model considers a loss system over a finite time horizon, and we provide a model variant that considers an infinite time horizon under the average reward criterion. Structural properties of the optimal policies are derived. Computational experiments using a real-world EMS dataset show that the optimal policies inform how to dynamically dispatch ambulance types to patients. We propose and evaluate three classes of heuristics: a static constant threshold heuristic, a greedy heuristic, and a dynamic greedy threshold heuristic. Computational results suggest that the greedy threshold heuristic closely approximates the optimal policies and reduces the complexity of implementing dynamic policies in real settings.
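The constant threshold idea described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual policy: it assumes two ambulance types (advanced and basic life support), two patient priority classes, and a loss system in which a call is dropped when no suitable unit is idle. All function names, parameters, and the reserve threshold are illustrative assumptions.

```python
def dispatch(patient_priority, idle_als, idle_bls, threshold=2):
    """Hypothetical static-threshold dispatch rule (illustrative only).

    patient_priority: 'high' or 'low'
    idle_als / idle_bls: counts of idle advanced / basic life support units
    threshold: number of ALS units held in reserve for future high-priority calls
    Returns the ambulance type to send, or None if the call is lost.
    """
    if patient_priority == 'high':
        # High-priority patients get an ALS unit whenever one is idle.
        if idle_als > 0:
            return 'ALS'
        return 'BLS' if idle_bls > 0 else None
    # Low-priority patients get BLS if available; an ALS unit is sent
    # only if doing so keeps more than `threshold` ALS units idle.
    if idle_bls > 0:
        return 'BLS'
    if idle_als > threshold:
        return 'ALS'
    return None  # call lost (loss-system assumption)
```

A static rule like this is simple to implement, but the paper's results suggest that making the threshold depend on the system state (the dynamic greedy threshold heuristic) tracks the optimal policy more closely.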
