As a novel urban delivery approach, the coordinated operation of a truck–drone pair has gained increasing popularity, where the truck follows a traveling salesman route and the drone launches from the truck to deliver packages to nearby customers. Previous studies have referred to this problem as the flying sidekick traveling salesman problem (FSTSP) and have proposed numerous algorithms to solve it. However, few studies have considered the stochasticity of travel times on the road network, caused mainly by traffic congestion, harsh weather conditions, etc., which heavily affects the truck's speed and, in turn, the drone's operations and the overall delivery schedule. In this study, we extend the FSTSP with stochastic travel times and formulate the problem as a Markov decision process (MDP). To overcome the curse of dimensionality, the model is solved with reinforcement learning (RL) algorithms, including the deep Q-network (DQN) and the advantage actor-critic (A2C) algorithm. Using synthetically generated instances that are widely accepted as benchmarks in the literature, we show that the RL algorithms also perform well as approximate optimization algorithms, outperforming a mixed integer programming (MIP) model and a local search heuristic on the original FSTSP without stochastic travel times. On the FSTSP with stochastic travel times, the RL algorithms obtain flexible policies that make dynamic decisions based on prevailing road traffic conditions, reducing delivery time by up to 28.65% compared with the MIP model and a dynamic local search (DLS) algorithm. We also conduct a case study using real-time traffic data collected in a mid-sized U.S. city via the Google Maps API. Compared with a benchmark computed by the DLS, the deep reinforcement learning (DRL) approach reduces total delivery time by 32.68% in the case study, showing great potential for practical adoption.