Abstract

This study focuses on optimizing resource allocation in complex dynamic environments, specifically vehicle dispatching in closed bipartite queueing networks. We present a novel curriculum-driven reinforcement learning (RL) approach that seamlessly incorporates domain knowledge and environmental feedback, effectively addressing the challenges associated with sparse rewards in RL applications. The approach is built on a scalable reinforcement learning framework that accommodates dynamically changing vehicle fleet sizes. We design dense artificial rewards using domain knowledge and incorporate artificial action–reward pairs into the original experience sequence, which forms the basic structure of the training instances. A difficulty momentum boosting strategy is proposed to produce a series of training instances with progressively increasing difficulty, ensuring that the RL agent learns decision strategies in an organized and smooth manner. Experimental results demonstrate that the proposed method significantly surpasses existing approaches in enhancing productivity and model learning efficiency for transport tasks in open-pit mines, while confirming the superiority of a flexible and automated curriculum learning process over a rigid setting. This approach has broad potential for application to dynamic resource allocation problems across industries such as manufacturing and logistics.
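To make the general idea concrete, the sketch below illustrates one plausible reading of the two ingredients named in the abstract: a domain-knowledge reward-shaping term added to the sparse environment reward, and a momentum-style curriculum schedule that raises instance difficulty while the agent keeps succeeding. All names (`shaped_reward`, `CurriculumScheduler`, `make_instance`, and the fleet/shovel parameters) are hypothetical illustrations, not the authors' implementation, and the numeric settings are arbitrary assumptions.

```python
import random

def shaped_reward(sparse_reward, progress_delta, weight=0.1):
    """Dense artificial reward (assumed form): combine the sparse environment
    reward with a domain-knowledge progress signal, e.g. reduced truck queueing."""
    return sparse_reward + weight * progress_delta

class CurriculumScheduler:
    """Assumed 'difficulty momentum' schedule: difficulty rises faster while the
    agent keeps exceeding a success target, and stalls when performance drops."""
    def __init__(self, start=1, max_difficulty=10, momentum=0.9):
        self.difficulty = float(start)
        self.max_difficulty = max_difficulty
        self.momentum = momentum
        self.velocity = 0.0

    def update(self, success_rate, target=0.8, step=0.5):
        # Accumulate momentum from how far the agent exceeds the success target.
        self.velocity = self.momentum * self.velocity + step * (success_rate - target)
        self.velocity = max(self.velocity, 0.0)   # never lower the difficulty
        self.difficulty = min(self.difficulty + self.velocity, self.max_difficulty)
        return int(self.difficulty)

def make_instance(difficulty):
    """Stand-in instance generator: larger fleets and more shovels as difficulty grows."""
    return {"num_trucks": 5 * difficulty, "num_shovels": 2 + difficulty}

def train_episode(instance):
    """Placeholder for one RL training episode; returns a success flag and sparse reward."""
    success = random.random() < 0.85
    return success, (1.0 if success else 0.0)

scheduler = CurriculumScheduler()
for epoch in range(20):
    level = int(scheduler.difficulty)
    results = [train_episode(make_instance(level)) for _ in range(32)]
    success_rate = sum(s for s, _ in results) / len(results)
    level = scheduler.update(success_rate)
    print(f"epoch {epoch:02d}: success={success_rate:.2f}, difficulty={level}")
```

In this reading, the scheduler plays the role of the automated curriculum: difficulty is never fixed in advance but adapts to observed performance, which is the property the abstract contrasts with a rigid curriculum setting.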
