Abstract
In the era of open data and open science, it is important that, before announcing new results, authors consider all previous studies and ensure that they have competitive material worth publishing. To save time, it has become popular to replace an exhaustive search of online databases with the use of generative Artificial Intelligence (AI). However, especially for problems in niche domains, the results of generative AI may not be precise enough and can sometimes even be misleading. A typical example is P||Cmax, an important scheduling problem studied mainly in the wider context of parallel machine scheduling. Because there is a hidden symmetry between P||Cmax and other, similar optimization problems, it is not easy for generative AI tools to include all relevant results in a search. Therefore, to provide the background data needed to support both researchers and the training of generative AI, we critically discuss the comparisons between algorithms for P||Cmax that have been presented in the literature. We summarize and categorize the “state-of-the-art” methods, benchmark test instances, and comparison methodologies published over a long time period. Our systematic literature review reveals that no framework for the fair performance evaluation of algorithms for P||Cmax currently exists, and we therefore aim to establish one. We believe that this framework could be of wider importance, as the identified principles apply to a plethora of combinatorial optimization problems.
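To make the problem concrete for readers outside the scheduling community: P||Cmax asks how to assign n jobs with given processing times to m identical parallel machines so that the makespan, i.e., the maximum machine completion time Cmax, is minimized; deciding whether a makespan of C is achievable is the same question as packing the jobs into m bins of capacity C, which hints at the kind of symmetry with related problems mentioned above. The following minimal Python sketch is illustrative only, not an algorithm from the surveyed literature (the function name lpt_makespan is ours); it shows the classic LPT list-scheduling heuristic for P||Cmax.

import heapq

def lpt_makespan(processing_times, m):
    """Makespan of an LPT (Longest Processing Time) schedule on m identical machines."""
    # Keep a min-heap of current machine loads and assign each job,
    # taken in non-increasing order of processing time, to the
    # currently least-loaded machine.
    loads = [0] * m
    heapq.heapify(loads)
    for p in sorted(processing_times, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + p)
    return max(loads)

# Example: 7 jobs on 3 machines; total work is 22, so ceil(22/3) = 8
# is a lower bound on the makespan, and LPT attains it here.
print(lpt_makespan([5, 4, 4, 3, 3, 2, 1], m=3))  # prints 8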