Abstract

Improving a country's emergency medical services means responding to more calls on time and, in turn, saving more lives. The ambulance redeployment problem, which arises in Turkey's 112 Emergency Medical system, has been addressed by many methods that aim to redeploy ambulances so as to minimize arrival times to calls. In this study, unlike many methods in the redeployment literature, ambulances are redeployed by a multi-armed bandit (MAB) algorithm. Using OpenStreetMap (OSM), a graph model consisting of 2400 nodes and bidirectional edges is constructed as a simplified map of Ankara for ambulance redeployment. Call distributions and travel times between the nodes are not known to the MAB algorithm beforehand; they are learned along the way through a trade-off between exploration and exploitation. The MAB algorithm is compared against DMEXCLP, a well-known dynamic redeployment optimization model. Two criteria are considered when comparing the performance of the algorithms in simulation: 1) the average arrival time and 2) the percentage of calls responded to within 15 minutes. In conclusion, it is shown that under the same conditions the MAB algorithm outperforms the DMEXCLP model on both criteria.
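As an illustration of the exploration-exploitation mechanism mentioned above, the sketch below shows a minimal epsilon-greedy multi-armed bandit in Python. This is not the paper's algorithm (the abstract does not specify which MAB variant is used); arm indices, rewards, and parameter values here are hypothetical. One could imagine each arm as a candidate redeployment node and the reward as a score derived from the observed response time.

```python
import random

class EpsilonGreedyBandit:
    """Illustrative epsilon-greedy bandit: each arm could represent a
    candidate redeployment node; rewards and parameters are hypothetical."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.counts = [0] * n_arms      # number of pulls per arm
        self.values = [0.0] * n_arms    # running mean reward per arm
        self.rng = random.Random(seed)

    def select_arm(self):
        # Exploration: with probability epsilon, try a random arm
        # (e.g. send an ambulance to a less-tried node).
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        # Exploitation: otherwise pick the arm with the best mean reward so far.
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental update of the running mean for the chosen arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Toy usage: arm 2 secretly yields the best reward; the bandit
# discovers this from feedback alone, with no prior knowledge.
bandit = EpsilonGreedyBandit(n_arms=3, epsilon=0.1, seed=42)
for _ in range(2000):
    arm = bandit.select_arm()
    reward = 1.0 if arm == 2 else 0.0
    bandit.update(arm, reward)
```

The key property mirrored from the abstract is that nothing about the reward distribution is known up front; estimates are built purely from observed feedback.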
