Abstract

In the resource levelling problem (RLP) under uncertainty, existing studies focus on obtaining an open-loop activity list that is not updated during project execution. In project management practice, however, further sources of uncertainty, such as activity overlaps and resource breakdowns, must also be addressed. In this paper, we extend the uncertain RLP by proposing a resource levelling problem with multiple uncertainties (RLP-MU) that simultaneously considers uncertainty in activity durations, activity overlaps and resource availabilities. We formulate the RLP-MU as a Markov decision process model. To level resource usage by dynamically scheduling activities at each decision point based on the observed information, we develop a hybrid open–closed-loop approximate dynamic programming algorithm (HOC-ADP). In the HOC-ADP, we devise a closed-loop rollout policy to approximate the cost-to-go function and use the concept of the average project to avoid time-consuming simulation. A greedy-decoding-based estimation of distribution algorithm is also devised to construct an open-loop policy, which is embedded in the HOC-ADP to improve it further. We additionally develop a simulation algorithm to evaluate the resource levelling performance of the HOC-ADP. Computational experiments on a benchmark dataset of 540 problem instances are conducted to analyze the performance of the HOC-ADP, and the impact of various factors on resource levelling is investigated. The experimental comparison results indicate that our HOC-ADP outperforms state-of-the-art meta-heuristics.
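For intuition, a rollout-style closed-loop decision rule of the kind described above can be sketched in a few lines. The sketch below is an illustrative simplification, not the paper's HOC-ADP: it assumes a single renewable resource, expected ("average project") durations in place of sampled ones, and the classical sum-of-squared-usage levelling cost; all identifiers and project data are hypothetical.

```python
from collections import defaultdict

# Illustrative sketch only -- NOT the paper's HOC-ADP. Assumptions: one
# renewable resource, expected ("average project") durations, and the
# classical sum-of-squared-usage levelling cost. All data are hypothetical.
# Classical RLP minimizes usage variability rather than enforcing a hard
# resource capacity, so no capacity check is performed here.

# activity -> (expected duration, per-period resource demand)
ACTIVITIES = {1: (3, 2), 2: (2, 4), 3: (4, 1), 4: (2, 3)}
PREDECESSORS = {1: [], 2: [1], 3: [1], 4: [2, 3]}

def schedule_with_list(order, fixed_starts):
    """Serial schedule generation: keep the starts already fixed by the
    closed-loop policy, then start each remaining activity in the
    precedence-feasible list `order` as early as its predecessors
    (under expected durations) allow."""
    start = dict(fixed_starts)
    for a in order:
        if a not in start:
            start[a] = max((start[p] + ACTIVITIES[p][0]
                            for p in PREDECESSORS[a]), default=0)
    return start

def levelling_cost(start):
    """Sum of squared per-period usage of the single resource."""
    usage = defaultdict(int)
    for a, s in start.items():
        dur, dem = ACTIVITIES[a]
        for t in range(s, s + dur):
            usage[t] += dem
    return sum(u * u for u in usage.values())

def rollout_decision(t, started, base_order):
    """One closed-loop decision: among the activities that are precedence-
    eligible at time t (plus the option of starting nothing), pick the one
    whose tentative start, completed by the open-loop base list, yields
    the lowest approximate cost-to-go."""
    def finished(p):
        return p in started and started[p] + ACTIVITIES[p][0] <= t
    eligible = [a for a in ACTIVITIES if a not in started
                and all(finished(p) for p in PREDECESSORS[a])]
    best_action, best_cost = None, float("inf")
    for a in eligible + [None]:          # None = postpone all starts
        trial = dict(started)
        if a is not None:
            trial[a] = t
        cost = levelling_cost(schedule_with_list(base_order, trial))
        if cost < best_cost:
            best_action, best_cost = a, cost
    return best_action

if __name__ == "__main__":
    base = [1, 2, 3, 4]                  # precedence-feasible open-loop list
    print(rollout_decision(0, {}, base)) # -> 1 on this toy instance
```

In this sketch, the "average project" shortcut shows up as the use of expected durations when the base list completes the schedule, which stands in for the Monte Carlo simulation a rollout policy would otherwise require to estimate the cost-to-go.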
