Abstract

We consider the $$P_m || C_{\max }$$ scheduling problem, where the goal is to schedule n jobs on m identical parallel machines $$(m < n)$$ so as to minimize the makespan. We revisit the famous Longest Processing Time (LPT) rule proposed by Graham in 1969. LPT sorts the jobs in non-ascending order of processing time and then assigns each job in turn to the machine with the smallest current load. We provide new insights into LPT and discuss the approximation ratio of a modification of LPT that improves Graham's bound from $$\left( \frac{4}{3} - \frac{1}{3m} \right) $$ to $$\left( \frac{4}{3} - \frac{1}{3(m-1)} \right) $$ for $$m \ge 3$$, and from $$\frac{7}{6}$$ to $$\frac{9}{8}$$ for $$m=2$$. We use linear programming to analyze the approximation ratio of our approach; this performance analysis can be seen as a valid alternative to formal proofs based on analytical derivation. We also derive from the proposed approach a heuristic with $$O(n \log n)$$ time complexity. The heuristic splits the sorted job set into tuples of m consecutive jobs ($$1,\dots ,m; m+1,\dots ,2m;$$ etc.) and sorts the tuples in non-increasing order of the difference (slack) between the largest and smallest job in each tuple. Given this new ordering of the job set, list scheduling is then applied. This approach strongly outperforms LPT on benchmark instances from the literature and is competitive with more involved approaches such as COMBINE and LDM.
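The two procedures sketched in the abstract can be illustrated in a few lines of Python. This is a minimal sketch, not the authors' implementation: the function names are our own, and the tie-breaking among tuples with equal slack simply follows Python's stable sort, which the abstract does not specify.

```python
import heapq

def list_schedule(jobs, m):
    """Assign jobs in the given order, each to the currently least-loaded
    of m machines; return the makespan (maximum machine load)."""
    loads = [0] * m  # a list of zeros is already a valid min-heap
    for p in jobs:
        # take the least-loaded machine and add the next job to it
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

def lpt(jobs, m):
    """Graham's LPT rule: list-schedule jobs in non-ascending order."""
    return list_schedule(sorted(jobs, reverse=True), m)

def slack_lpt(jobs, m):
    """Tuple/slack heuristic from the abstract: split the sorted jobs into
    consecutive tuples of m, order tuples by non-increasing slack (largest
    minus smallest job in the tuple), then apply list scheduling."""
    s = sorted(jobs, reverse=True)
    tuples = [s[i:i + m] for i in range(0, len(s), m)]
    tuples.sort(key=lambda t: t[0] - t[-1], reverse=True)
    return list_schedule([p for t in tuples for p in t], m)
```

Both variants run in $$O(n \log n)$$ time: the dominant costs are the initial sort of the jobs and the $$O(\log m)$$ heap operation per assignment.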
