Abstract

Container drayage plays a critical role in global intermodal container transportation, as it accomplishes the first- and last-mile shipment of containers. A container drayage operator dispatches a set of tractors and a set of trailers to transport containers within a local area. An important feature of these operations is that the arrival times of service requests are uncertain, so the operator must respond to requests dynamically. Moreover, since customers usually impose time windows on container pickup and delivery, it is important to exploit the service flexibility of requests when allocating resources, in order to enhance resource efficiency. In this paper, we study a dynamic container drayage problem that arises in practical drayage operations. We develop a Markov decision process (MDP) model of the problem to capture the dynamic interaction between the drayage operator and the uncertain environment. To solve the MDP model, we propose a novel integrated reinforcement learning and integer programming method, in which reinforcement learning enables real-time responses to requests by deciding whether each request should be served immediately upon arrival or held for a period of time, while integer programming periodically plans resource allocation for serving the accrued requests. The proposed method aims to identify a fleet management policy that exploits requests’ service flexibility to maximize the operator’s service capacity and profitability. Finally, we evaluate the performance of the proposed method on instances generated from the operational data of a container drayage operator in Singapore.
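To make the division of labor between the two components concrete, the Python sketch below illustrates one way such an integrated loop could be organized: a learned serve-or-hold rule triages each request in real time on arrival, and a periodic allocation step plans for the accrued requests. This is not the authors’ implementation; all names (Request, rl_serve_or_hold, solve_allocation_ip), the tabular epsilon-greedy policy, and the greedy stand-in for the integer program are illustrative assumptions.

    # Hypothetical sketch of the integrated RL + IP loop described above.
    import random
    from dataclasses import dataclass

    @dataclass
    class Request:
        arrival: int    # time the request becomes known
        deadline: int   # end of the customer's pickup/delivery time window
        profit: float   # revenue from serving the request

    q_table = {}  # coarse state (time-window slack bucket) -> action values

    def rl_serve_or_hold(req: Request, now: int, eps: float = 0.1) -> str:
        """Real-time response: serve the request immediately or hold it."""
        state = min((req.deadline - now) // 4, 5)   # discretized slack
        q = q_table.setdefault(state, {"serve": 0.0, "hold": 0.0})
        if random.random() < eps:                   # epsilon-greedy exploration
            return random.choice(["serve", "hold"])
        return max(q, key=q.get)

    def solve_allocation_ip(requests, n_tractors, n_trailers):
        """Placeholder for the periodic integer program. A real version
        would build and solve an IP over tractor-trailer-request
        assignments; here we greedily pick the most profitable requests
        up to the fleet capacity."""
        capacity = min(n_tractors, n_trailers)
        return sorted(requests, key=lambda r: -r.profit)[:capacity]

    # One planning epoch: triage arrivals on the fly, then plan periodically.
    arrivals = [Request(0, 8, 5.0), Request(2, 24, 8.0), Request(3, 12, 6.5)]
    to_plan, held = [], []
    for req in arrivals:
        if rl_serve_or_hold(req, req.arrival) == "serve":
            to_plan.append(req)   # enters the next allocation round
        else:
            held.append(req)      # deferred; exploits its time-window slack

    plan = solve_allocation_ip(to_plan, n_tractors=2, n_trailers=2)
    print("dispatch now:", [r.profit for r in plan], "| held:", len(held))

In a full implementation, the hold action would be rewarded when deferral lets the periodic integer program consolidate requests onto fewer tractor-trailer routes, which is the sense in which time-window flexibility improves resource efficiency.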
