Abstract
Due to uncertainties in the project execution process, the original plan often cannot be carried out as intended and must be repaired through rescheduling. Priority rules are the most common rescheduling method because of their well-known advantages, such as simplicity and speed. Although numerous papers have compared different priority rules, managers often do not know which rule to use for project rescheduling in a specific situation. In this paper, we propose a reinforcement learning based approach for the adaptive selection of priority rules in dynamic environments, which comprises an off-line phase and an on-line phase. In the off-line phase, reinforcement learning is used to learn scheduling knowledge and obtain the scheduling model; transfer learning can be used in this phase to reuse scheduling models across different cases. In the on-line phase, the scheduling model adaptively selects appropriate rules for rescheduling when the initial plan becomes infeasible due to unexpected disturbances. Experiments show that the proposed method achieves better rescheduling performance than other priority-rule-based heuristic algorithms under different disturbances. In addition, we find that the time required for off-line training can be greatly reduced by using transfer learning, which further indicates that our method indeed learns essential scheduling knowledge.
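The two-phase design described above can be illustrated with a minimal sketch: a tabular Q-learning agent that learns off-line which priority rule to apply in a given (discretized) project state, and then selects rules greedily on-line when a disturbance invalidates the plan. The rule names, state encoding, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

# Hypothetical set of candidate priority rules; the paper's actual rule set
# is not specified here, so these names are placeholders.
PRIORITY_RULES = ["SPT", "LPT", "MSLK", "LFT"]


class RuleSelectionAgent:
    """Tabular Q-learning over discretized project states -- a simplified
    stand-in for off-line training of a rule-selection (scheduling) model."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # maps (state, rule) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, epsilon

    def select_rule(self, state, greedy=False):
        # Epsilon-greedy exploration during off-line training;
        # purely greedy selection during on-line rescheduling.
        if not greedy and random.random() < self.eps:
            return random.choice(PRIORITY_RULES)
        return max(PRIORITY_RULES, key=lambda r: self.q[(state, r)])

    def update(self, state, rule, reward, next_state):
        # Standard Q-learning backup on an observed transition.
        best_next = max(self.q[(next_state, r)] for r in PRIORITY_RULES)
        td_target = reward + self.gamma * best_next
        self.q[(state, rule)] += self.alpha * (td_target - self.q[(state, rule)])


# Off-line phase (sketch): interact with a simulated project environment and
# feed observed (state, rule, reward, next_state) transitions to agent.update().
# On-line phase (sketch): when a disturbance makes the plan infeasible, call
# agent.select_rule(current_state, greedy=True) to pick the rescheduling rule.
```

A transfer-learning variant would initialize `self.q` (or a learned function approximator) from a model trained on a related project instance rather than from scratch, which is the mechanism the abstract credits for reducing off-line training time.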