Abstract
Routing delivery vehicles to serve customers in dynamic and uncertain environments, such as dense city centers, is a challenging task that requires robustness and flexibility. Most existing approaches to routing problems produce solutions offline in the form of plans, which only apply to the situation they were optimized for. Instead, we propose to learn a policy that provides decision rules for building routes from online measurements of the environment state, including the configuration of customers itself. In doing so, we can generalize from past experiences and quickly provide decision rules for new instances of the problem without re-optimizing any parameters of our policy. The difficulty with this approach lies in the complexity of representing this state. In this paper, we introduce a sequential multi-agent decision-making model to formalize the description and the temporal evolution of a Dynamic and Stochastic Vehicle Routing Problem. We propose a variant of a Deep Neural Network using Attention Mechanisms to learn a generalizable representation of the state and to output online decision rules adapted to dynamic and stochastic information. Using artificially generated data, we show promising results in these dynamic and stochastic environments, while remaining competitive with classical offline heuristics in deterministic ones.
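To illustrate the kind of architecture the abstract refers to, the following is a minimal sketch (not the authors' code, and with hypothetical feature dimensions and module names) of an attention-based encoder that maps the current environment state, one feature vector per known customer plus the vehicle state, to per-customer scores from which an online decision rule could be derived.

```python
# Hypothetical sketch: an attention-based policy head for online routing decisions.
# Assumptions (not from the paper): PyTorch, customer features = (x, y, demand,
# appearance flag), vehicle features = (x, y, remaining capacity).
import torch
import torch.nn as nn

class AttentionStateEncoder(nn.Module):
    def __init__(self, customer_dim=4, vehicle_dim=3, embed_dim=128, n_heads=8):
        super().__init__()
        self.embed_customers = nn.Linear(customer_dim, embed_dim)
        self.embed_vehicle = nn.Linear(vehicle_dim, embed_dim)
        # The vehicle's state attends over all currently known customers, so the
        # representation adapts to whatever customer configuration is observed online.
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, customers, vehicle, mask=None):
        # customers: (batch, n_customers, customer_dim), set size varies per instance
        # vehicle:   (batch, vehicle_dim), current vehicle state
        # mask:      (batch, n_customers), True for customers to ignore (served / not yet revealed)
        c = self.embed_customers(customers)             # (batch, n, embed_dim)
        q = self.embed_vehicle(vehicle).unsqueeze(1)    # (batch, 1, embed_dim)
        ctx, _ = self.attn(q, c, c, key_padding_mask=mask)
        # Score each customer against the attended context; a policy can take a
        # softmax over these logits to choose the next customer to visit.
        logits = self.score(c + ctx).squeeze(-1)        # (batch, n_customers)
        if mask is not None:
            logits = logits.masked_fill(mask, float("-inf"))
        return logits
```

Because the encoder operates on a set of customer embeddings rather than a fixed-size plan, the same trained parameters can be applied to new problem instances without re-optimization, which is the property the abstract emphasizes.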