Abstract

Ship handling is the cornerstone of port production operations, and it requires the judicious allocation of diverse production resources to improve the efficiency of loading and unloading. This paper introduces an optimisation method based on deep reinforcement learning for scheduling berths and yards at a bulk cargo terminal. A Markov Decision Process model is formulated by analysing the scheduling processes and unloading operations in the import business of bulk cargo ports. The study presents an enhanced reinforcement learning algorithm, PS-D3QN (Prioritised Experience Replay and Softmax strategy-based Dueling Double Deep Q-Network), which combines the strengths of the Double DQN and Dueling DQN algorithms. The proposed solution is evaluated on actual port data and benchmarked against the two other algorithms discussed in the paper. The numerical experiments and comparative analysis show that the PS-D3QN algorithm significantly improves the efficiency of berth and yard scheduling in bulk terminals, reduces port operating costs, and eliminates errors associated with manual scheduling. With suitable adjustments, the algorithm can also be tailored to scheduling problems in production and manufacturing, such as the job shop scheduling problem and its extensions.
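
To illustrate the building blocks the abstract names, the sketch below shows, in PyTorch, a dueling Q-network, a Double DQN target computation, and softmax (Boltzmann) action selection. It is a minimal illustration under assumed interfaces: the layer sizes, temperature, and function signatures are placeholders, not the paper's actual PS-D3QN configuration, and the prioritised replay buffer is omitted.

```python
# Minimal sketch of the PS-D3QN ingredients named in the abstract (assumed
# sizes and interfaces, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DuelingQNet(nn.Module):
    """Dueling architecture: separate state-value and advantage streams."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def softmax_action(q_values: torch.Tensor, temperature: float = 1.0) -> int:
    """Softmax (Boltzmann) exploration: sample an action with P(a) ∝ exp(Q(s,a)/T)."""
    probs = F.softmax(q_values / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()


def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double DQN: the online net selects the next action, the target net evaluates it."""
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

In a full training loop, transitions would be drawn from a prioritised replay buffer, with sampling probabilities and importance-sampling weights derived from the temporal-difference error between the online Q-values and this Double DQN target.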
