Abstract

This paper addresses the dispatching problem faced by an urban consolidation center. The center receives orders according to a stochastic arrival process and dispatches them in batches for last-mile distribution. The operator of the center aims to find the cost-minimizing consolidation policy, which depends on the orders at hand, preannounced orders, and stochastic arrivals. We present this problem as a variant of the delivery dispatching problem that includes dispatch windows and define a corresponding Markov decision model. Larger instances of the problem suffer from intractably large state, outcome, and action spaces. We propose an approximate dynamic programming (ADP) algorithm that can handle such instances, using a linear value function approximation to estimate the downstream costs. To design the value function approximation, we construct various sets of basis functions, numerically evaluate their suitability, and discuss the properties of good basis functions for the dispatching problem. Numerical experiments on toy-sized instances show that the best set of basis functions approximates the optimal values with an error of less than 3%. To cope with large action spaces, we formulate an integer linear program to be used within our ADP algorithm. We evaluate the performance of ADP policies against four benchmark policies: two heuristic policies, a direct cost minimization policy, and a post-decision rollout policy. We test the performance of ADP on a variety of networks. ADP consistently outperforms the benchmark policies, performing particularly well when there is sufficient flexibility in dispatch times. The online appendix is available at https://doi.org/10.1287/trsc.2017.0773.
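
To illustrate the kind of linear value function approximation the abstract refers to, the sketch below shows a downstream-cost estimate built as a weighted sum of basis functions over a post-decision state, updated by a stochastic-gradient step. The specific features, state representation, and stepsize are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def basis_functions(state):
    """Map a post-decision state to a feature vector.
    Hypothetical features: a constant, the number of orders held,
    the total volume held, and the slack until the tightest
    dispatch window closes."""
    orders_held, volume_held, min_window_slack = state
    return np.array([1.0, orders_held, volume_held, min_window_slack])

def approximate_value(theta, state):
    """Estimate downstream costs as a linear combination of basis functions."""
    return theta @ basis_functions(state)

def update_weights(theta, state, observed_cost, stepsize=0.05):
    """One stochastic-gradient update toward a sampled downstream cost,
    as performed in each iteration of an ADP loop."""
    phi = basis_functions(state)
    error = approximate_value(theta, state) - observed_cost
    return theta - stepsize * error * phi

# Tiny usage example with made-up numbers.
theta = np.zeros(4)
post_decision_state = (5, 12.0, 3)   # (orders held, volume, window slack)
sampled_downstream_cost = 40.0       # cost observed in one sampled trajectory
theta = update_weights(theta, post_decision_state, sampled_downstream_cost)
print(approximate_value(theta, post_decision_state))
```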
