Abstract

This study leverages simulation-optimisation with a Reinforcement Learning (RL) model to analyse the routing behaviour of delivery vehicles (DVs). We conceptualise the system as a stochastic k-armed bandit problem, representing a sequential interaction between a learner (the DV) and its surrounding environment. Each DV is assigned a random number of customers and an initial delivery route. If a loading zone is unavailable, the RL model is used to select an alternative delivery strategy, and the DV's route is modified accordingly. The penalty is gauged by the additional trucking and walking time incurred compared to the originally planned route. Our methodology is tested on a simulated network featuring realistic traffic conditions and a fleet of DVs employing four distinct last-mile delivery strategies. The results of our numerical experiments underscore the advantages of providing DVs with an RL-based decision support system for en-route decision-making, yielding benefits to the overall efficiency of the transport network.

Highlights

- Combining simulation and optimisation algorithms with reinforcement learning
- Modelling DVs' en-route parking decisions with a k-armed bandit algorithm
- Evaluating the impacts of delivery strategies on traffic congestion and last-mile delivery efficiency
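To make the bandit framing concrete, the sketch below illustrates one plausible way the en-route strategy choice could be structured: the four arms correspond to the four delivery strategies, and the reward is the negative penalty (extra trucking plus walking time relative to the planned route). It assumes an epsilon-greedy action-value rule, and the names (`select_strategy`, `update`, `simulate_penalty`) are illustrative placeholders, not details taken from the paper.

```python
import random

K = 4                       # one arm per last-mile delivery strategy
q_estimates = [0.0] * K     # running estimate of each strategy's value
counts = [0] * K            # number of times each strategy was chosen
epsilon = 0.1               # exploration rate (assumed)

def select_strategy():
    """Epsilon-greedy choice among the K delivery strategies."""
    if random.random() < epsilon:
        return random.randrange(K)
    return max(range(K), key=lambda a: q_estimates[a])

def update(strategy, extra_trucking_time, extra_walking_time):
    """Incremental sample-average update from one delivery episode."""
    reward = -(extra_trucking_time + extra_walking_time)
    counts[strategy] += 1
    q_estimates[strategy] += (reward - q_estimates[strategy]) / counts[strategy]

def simulate_penalty(strategy):
    """Placeholder for the traffic simulation returning the observed penalties."""
    return random.uniform(0, 10), random.uniform(0, 15)

# Whenever a loading zone is unavailable, the DV picks a strategy,
# observes the resulting penalty, and updates its value estimates.
for episode in range(1000):
    a = select_strategy()
    trucking, walking = simulate_penalty(a)
    update(a, trucking, walking)
```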
