Abstract

The optimum localization problem, which has a wide range of real-life applications such as emergency services, command-and-control systems, warehouse location, and shipment planning, aims to find the best location that minimizes arrival, response, or return time, which can be vital in some applications. In most cases, uncertainty in traffic is the most challenging issue, and in the literature it is generally assumed to follow an a priori known stochastic distribution. In this study, the problem is defined as the optimum localization of ambulances for emergency services, and traffic is modeled as Markovian to generate context data. Unlike the solution methods in the literature, there is no mutual information transfer between the model and the solution of the problem; instead, a contextual multi-armed bandit learner tries to determine the underlying traffic using simple assumptions. The performance of the bandit algorithm is compared with that of a classical estimation method to demonstrate the effectiveness of the learning approach on the optimum localization problem.
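The setting described in the abstract can be illustrated with a minimal sketch of a tabular epsilon-greedy contextual bandit, where the arms are candidate ambulance locations, the context is a discrete traffic state, and the reward is the negative response time. This is an illustrative assumption, not the paper's actual algorithm or simulator; all names and the toy response-time model below are hypothetical.

```python
import random

class ContextualEpsilonGreedy:
    """One independent epsilon-greedy value table per traffic context."""

    def __init__(self, n_contexts, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        # counts[c][a] = pulls of arm a in context c; values[c][a] = running mean reward
        self.counts = [[0] * n_arms for _ in range(n_contexts)]
        self.values = [[0.0] * n_arms for _ in range(n_contexts)]

    def select(self, context):
        # Explore uniformly with probability epsilon, otherwise exploit.
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts[context]))
        row = self.values[context]
        return max(range(len(row)), key=row.__getitem__)

    def update(self, context, arm, reward):
        # Incremental mean update for the chosen (context, arm) pair.
        self.counts[context][arm] += 1
        n = self.counts[context][arm]
        self.values[context][arm] += (reward - self.values[context][arm]) / n

# Toy simulator: mean response times per (traffic state, candidate location).
# Context 0 (light traffic) favors location 0; context 1 favors location 1.
random.seed(0)
mean_time = [[4.0, 7.0], [9.0, 5.0]]
bandit = ContextualEpsilonGreedy(n_contexts=2, n_arms=2)
for _ in range(5000):
    ctx = random.randrange(2)
    arm = bandit.select(ctx)
    reward = -random.gauss(mean_time[ctx][arm], 1.0)  # shorter response = higher reward
    bandit.update(ctx, arm, reward)
```

After enough interactions, the greedy choice in each traffic state converges to the location with the shortest expected response time, without the learner ever being told the underlying Markovian traffic model.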


