Abstract

Emergency medical service (EMS) systems have two main goals when sending ambulances to patients: responding rapidly, and sending the right type of personnel based on each patient's health needs. We address these objectives by formulating and studying a Markov decision process model that determines, in real time, which type of ambulance (server) to send to each patient. The base model considers a loss system over a finite time horizon, and we provide a model variant that considers an infinite time horizon under the average reward criterion. We derive structural properties of the optimal policies. Computational experiments using a real-world EMS dataset show that the optimal policies inform how to dynamically dispatch ambulance types to patients. We propose and evaluate three classes of heuristics: a static constant threshold heuristic, a greedy heuristic, and a dynamic greedy threshold heuristic. Computational results suggest that the greedy threshold heuristic closely approximates the optimal policies while reducing the complexity of implementing dynamic policies in real settings.
