Articles published on Unrelated Machines
247 search results, sorted by recency
- Research Article
- 10.1145/3785408
- Dec 16, 2025
- Journal of the ACM
- George Christodoulou + 2 more
We show that the best approximation ratio of deterministic truthful mechanisms for makespan minimization on n unrelated machines is n, as conjectured by Noam Nisan and Amir Ronen.
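For readers unfamiliar with the objective in this entry: on unrelated machines each job's processing time depends on which machine runs it, and the makespan is the largest total load on any machine. A minimal sketch (the cost matrix and assignment are illustrative, not taken from the paper):

```python
def makespan(p, assignment):
    """Makespan of an assignment on unrelated machines.

    p[i][j] is the processing time of job j on machine i;
    assignment[j] is the machine that runs job j.
    """
    loads = [0] * len(p)
    for job, machine in enumerate(assignment):
        loads[machine] += p[machine][job]
    return max(loads)

# Two machines, three jobs with machine-dependent processing times.
p = [[2, 4, 1],
     [3, 1, 5]]
print(makespan(p, [0, 1, 0]))  # machine 0 runs jobs 0 and 2 -> loads [3, 1] -> 3
```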
- Research Article
- 10.1145/3774415
- Nov 3, 2025
- ACM Transactions on Algorithms
- David G Harris
We describe a new dependent-rounding algorithmic framework for bipartite graphs. Given a fractional assignment \(\vec{x}\) of values to edges of a graph \(G=(U\cup V,E)\), the algorithms return an integral solution \(\vec{X}\) such that each right-node \(v\in V\) has at most one neighboring edge \(f\) with \(X_{f}=1\), and the variables \(X_{e}\) also satisfy broad nonpositive-correlation properties. In particular, for any edges \(e_{1},e_{2}\) sharing a left-node \(u\in U\), the variables \(X_{e_{1}},X_{e_{2}}\) have strong negative correlation, i.e., the expectation of \(X_{e_{1}}X_{e_{2}}\) is significantly below \(x_{e_{1}}x_{e_{2}}\). This algorithm is based on generating negatively correlated Exponential random variables and using them in a rounding method inspired by a contention-resolution scheme of Im & Shadloo (2020). Our algorithm gives stronger and much more flexible negative-correlation properties. Dependent-rounding schemes with negative-correlation properties have been used in approximation algorithms for job scheduling on unrelated machines to minimize weighted completion times (Bansal, Srinivasan, & Svensson (2021), Im & Shadloo (2020), Im & Li (2023)). Using our new dependent-rounding algorithm, among other improvements, we obtain a \(1.398\)-approximation for this problem. This significantly improves over the prior \(1.45\)-approximation ratio of Im & Li (2023).
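A toy illustration of the exponential-clocks idea behind such rounding schemes: each fractional edge incident to a right-node draws an exponential clock with rate equal to its fractional value, and the smallest clock wins. This simplified version uses independent clocks, so it preserves the marginals exactly but does not reproduce the paper's negative-correlation construction; all names and data are illustrative.

```python
import random

def round_node(x, rng):
    """Round the fractional edges x = {edge: x_e} incident to one right-node
    (with sum(x) <= 1) via exponential clocks: each edge draws Exp(rate=x_e)
    and the smallest clock wins; a dummy clock carrying the leftover mass may
    win instead, meaning no edge is selected.  P[edge e wins] = x_e exactly,
    since the minimum of independent exponentials lands on each clock with
    probability proportional to its rate.
    """
    clocks = {e: rng.expovariate(xe) for e, xe in x.items() if xe > 0}
    slack = 1.0 - sum(x.values())
    if slack > 0:
        clocks[None] = rng.expovariate(slack)  # "select nothing" outcome
    return min(clocks, key=clocks.get)

rng = random.Random(0)
x = {"e1": 0.5, "e2": 0.3}  # fractional edge values at one right-node, sum 0.8
hits = sum(round_node(x, rng) == "e1" for _ in range(20000))
print(hits / 20000)  # empirical frequency, close to x_e1 = 0.5
```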
- Research Article
- 10.1007/s10732-025-09571-4
- Oct 7, 2025
- Journal of Heuristics
- Mirosław Ławrynowicz + 1 more
In this paper, we present a study of the robust (non-deterministic) job scheduling problem with interval release dates on unrelated machines. Robust optimization is a tractable alternative to stochastic optimization, suited for problems in which parameter distributions are undetermined due to insufficient data. Our formulation minimizes the maximum regret, defined as the worst-case deviation from an optimal makespan, under the assumption that each release date belongs to a well-defined interval. We analyze the properties of the robust scheduling problem, examining its inapproximability, the determination of an upper bound on the regret function, and the selection of feasible scenarios. To solve the robust scheduling problem, we propose an algorithm that implements a greedy strategy and schedules jobs using only the deterministic criterion (the makespan) instead of evaluating the worst-case regret. We compare the quality of our schedules with solutions obtained by a suitably adapted and tuned simulated annealing algorithm. Additionally, we demonstrate the applicability of our robust framework to the Scheduling-Location problem (a deterministic problem in which the parameters are precisely defined), which combines the facility location and job scheduling problems. The numerical evaluation shows that accounting for uncertainty with a regret-averse approach (the minimax regret criterion) can, in certain cases, yield optimal solutions for incomplete datasets.
- Research Article
- 10.46298/theoretics.25.17
- Aug 14, 2025
- TheoretiCS
- Yuda Feng + 1 more
We give the first $O(1)$-approximation for the weighted Nash Social Welfare problem with additive valuations. The approximation ratio we obtain is $e^{1/e} + \varepsilon \approx 1.445 + \varepsilon$, which matches the best known approximation ratio for the unweighted case. Both our algorithm and analysis are simple. We solve a natural configuration LP for the problem, and obtain the allocation of items to agents using a randomized version of the Shmoys-Tardos rounding algorithm developed for unrelated machine scheduling problems. In the analysis, we show that the approximation ratio of the algorithm is at most the worst gap between the Nash social welfare of the optimum allocation and that of an EF1 allocation, for an unweighted Nash Social Welfare instance with identical additive valuations. This gap was shown to be at most $e^{1/e} \approx 1.445$ by Barman, Krishnamurthy and Vaish, leading to our approximation ratio.
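The objective in this entry, for reference: the (unweighted) Nash social welfare of an allocation is the geometric mean of the agents' valuations for their own bundles. A minimal helper under additive valuations (the valuations and allocation are illustrative):

```python
import math

def nash_social_welfare(valuations, allocation):
    """Geometric mean of agents' utilities under additive valuations.

    valuations[i][j] is agent i's value for item j;
    allocation[i] is the set of items given to agent i.
    """
    utils = [sum(valuations[i][j] for j in bundle)
             for i, bundle in enumerate(allocation)]
    return math.prod(utils) ** (1.0 / len(utils))

v = [[4, 1, 0],
     [1, 3, 2]]
print(nash_social_welfare(v, [{0}, {1, 2}]))  # geometric mean sqrt(4 * 5)
```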
- Research Article
- 10.1007/s10951-025-00848-x
- Jul 5, 2025
- Journal of Scheduling
- Baruch Mor + 2 more
Polynomial-time solutions for minimizing total load on unrelated machines with position-dependent processing times and rate-modifying activities
- Research Article
- 10.1016/j.tcs.2025.115258
- Jul 1, 2025
- Theoretical Computer Science
- George Karakostas + 1 more
Approximation algorithms for maximum weighted throughput on unrelated machines
- Research Article
- 10.1145/3718360
- May 14, 2025
- ACM Transactions on Economics and Computation
- George Christodoulou + 2 more
We study truthful mechanisms for allocation problems in graphs, both in the minimization (i.e., scheduling) and maximization (i.e., auctions) settings. The minimization problem is a special case of the well-studied unrelated machines scheduling problem, in which every given task can be executed only by two pre-specified machines (in the case of graphs) or by a given subset of machines (in the case of hypergraphs). This corresponds to a multigraph whose nodes are the machines and whose (hyper)edges are the tasks. This class of problems belongs to multidimensional mechanism design, for which there are no known general mechanisms other than the VCG and its generalization to affine minimizers. We propose a new class of truthful mechanisms that have significantly better performance than affine minimizers in many settings. Specifically, we provide upper and lower bounds for truthful mechanisms on general multigraphs, as well as on special classes of graphs such as stars, trees, planar graphs, k-degenerate graphs, and graphs of a given treewidth. We also consider the objective of minimizing or maximizing the L_p-norm of the values of the players, a generalization of makespan minimization that corresponds to p = ∞, and extend the results to any p > 0.
- Research Article
- 10.1145/3711872
- May 13, 2025
- ACM Transactions on Parallel Computing
- Alexander Lindermayr + 1 more
In non-clairvoyant scheduling, the task is to schedule jobs with a priori unknown processing requirements. We revisit this well-studied problem with the objective of minimizing the total (weighted) completion time in a recently popular learning-augmented setting that integrates possibly imperfect predictions into online algorithm design. While previous works used predictions on processing requirements, we propose a new prediction model that provides a relative order of jobs, which could be seen as predicting algorithmic actions rather than parts of the unknown input. We show that these succinct predictions have desired properties, admit a natural error measure, and enable algorithms with strong performance guarantees. Additionally, these predictions are learnable in both theory and practice. We generalize the algorithmic framework proposed in the seminal article by Purohit, Kumar, and Svitkina (NeurIPS 2018) and present the first learning-augmented scheduling results for weighted jobs and unrelated machines. We demonstrate in empirical experiments the practicability and superior performance compared with the previously suggested single-machine algorithms.
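The quantity this entry minimizes, for reference: when jobs run in a fixed order on one machine, job j completes at the sum of the processing times up to and including j, and the objective sums the weighted completion times. A minimal sketch of evaluating a (possibly predicted) job order; the job names and data are illustrative:

```python
def total_weighted_completion_time(order, proc, weight):
    """Sum of w_j * C_j when jobs run in the given order on one machine."""
    t, total = 0.0, 0.0
    for j in order:
        t += proc[j]          # job j completes at time t
        total += weight[j] * t
    return total

proc = {"a": 1, "b": 3, "c": 2}
w = {"a": 1, "b": 1, "c": 1}
# With unit weights, shortest-processing-time order is optimal.
ideal = total_weighted_completion_time(["a", "c", "b"], proc, w)
print(ideal)  # completion times 1 + 3 + 6 = 10
```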
- Research Article
- 10.1609/aaai.v39i13.33513
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
- Michal Feldman + 3 more
We study fair mechanisms for the classic job scheduling problem on unrelated machines with the objective of minimizing the makespan. This problem is equivalent to minimizing the egalitarian social cost in the fair division of chores. The two prevalent fairness notions in the fair division literature are envy-freeness and proportionality. Prior work has established that no envy-free mechanism can provide better than an Ω(log m / log log m)-approximation to the optimal makespan, where m is the number of machines, even when payments to the machines are allowed. In strong contrast to this impossibility, our main result demonstrates that there exists a proportional mechanism (with payments) that achieves a 3/2-approximation to the optimal makespan, and this ratio is tight. To prove this result, we provide a full characterization of allocation functions that can be made proportional with payments. Furthermore, we show that for instances with normalized costs, there exists a proportional mechanism that achieves the optimal makespan. We conclude with important directions for future research concerning other fairness notions, including relaxations of envy-freeness. Notably, we show that the technique leading to the impossibility result for envy-freeness does not extend to its relaxations.
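For reference, the basic proportionality notion for chores (stated here without payments): each agent's cost for its own bundle is at most a 1/n share of its cost for doing all the chores. A minimal checker sketch with illustrative data:

```python
def is_proportional(costs, allocation):
    """Check proportionality for chores: agent i's cost for its bundle
    must not exceed cost_i(all chores) / n.

    costs[i][j] is agent i's cost for chore j;
    allocation[i] is the set of chores assigned to agent i.
    """
    n = len(costs)
    for i, bundle in enumerate(allocation):
        share = sum(costs[i]) / n
        if sum(costs[i][j] for j in bundle) > share + 1e-9:
            return False
    return True

costs = [[2, 2, 2, 2],
         [1, 3, 1, 3]]
# Each agent bears exactly half of its own total cost -> proportional.
print(is_proportional(costs, [{0, 1}, {2, 3}]))
```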
- Research Article
- 10.1007/s10951-024-00827-8
- Jan 30, 2025
- Journal of Scheduling
- Martin Koutecký + 1 more
The task of scheduling jobs to machines while minimizing the makespan, the sum of weighted completion times, or a norm of the load vector is among the oldest and most fundamental tasks in combinatorial optimization. Since all of these problems are in general NP-hard, much attention has been given to the regime where there is only a small number k of job types, but possibly a large number n of jobs; this is the few-job-types, high-multiplicity regime. Despite many positive results, the hardness boundary of this regime was not understood until now. We show that makespan minimization on uniformly related machines (Q|HM|Cmax) is NP-hard already with 6 job types, and that the related Cutting Stock problem is NP-hard already with 8 item types. For the more general unrelated machines model (R|HM|Cmax), we show that if the largest job size p_max or the number of jobs n is polynomially bounded in the instance size |I|, there are algorithms with complexity |I|^{poly(k)}. Our main result is that this is unlikely to be improved, because Q||Cmax is W[1]-hard parameterized by k already when n, p_max, and the numbers describing the machine speeds are polynomial in |I|; the same holds for R||Cmax (without machine speeds) when the job sizes matrix has rank 2. Our positive and negative results also extend to the objectives ℓ_2-norm minimization of the load vector and, partially, the sum of weighted completion times ∑ w_j C_j. Along the way, we answer affirmatively the question of whether makespan minimization on identical machines (P||Cmax) is fixed-parameter tractable parameterized by k, extending our understanding of this fundamental problem. Together with our hardness results for Q||Cmax, this implies that the complexity of P|HM|Cmax is the only remaining open case.
- Research Article
- 10.1007/s40747-024-01677-9
- Dec 5, 2024
- Complex & Intelligent Systems
- Nikolina Frid + 2 more
The concept of green scheduling, which deals with the environmental impact of the scheduling process, is becoming increasingly important due to growing environmental concerns. Most green scheduling problem variants focus on modelling the energy consumption during the execution of the schedule. However, the dynamic unrelated machines environment is rarely considered, mainly because it is difficult to manually design simple heuristics, called dispatching rules (DRs), which are suitable for solving dynamic, non-standard scheduling problems. Using hyperheuristics, especially genetic programming (GP), alleviates the problem since it enables the automatic design of new DRs. In this study, we apply GP to automatically design DRs for solving the green scheduling problem in the unrelated machines environment under dynamic conditions. The total energy consumed during the system execution is optimised along with two standard scheduling criteria. The three most commonly investigated green scheduling problem variants from the literature are selected, and GP is adapted to generate appropriate DRs for each. The experiments show that GP-generated DRs efficiently solve the problem under dynamic conditions, providing a trade-off between optimising standard and energy-related criteria.
- Research Article
- 10.1007/s00453-024-01277-6
- Oct 12, 2024
- Algorithmica
- Yansong Gao + 1 more
The problem of scheduling unrelated machines has been studied since the inception of algorithmic mechanism design (Nisan and Ronen, Algorithmic mechanism design (extended abstract). In: Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing (STOC), pp. 129–140, 1999). It is a resource allocation problem that entails assigning m tasks to n machines for execution. Machines are regarded as strategic agents who may lie about their execution costs so as to minimize their time cost. To address the situation when monetary payment is not an option to compensate the machines' costs, Koutsoupias (Theory Comput Syst 54:375–387, 2014) devised two truthful mechanisms, K and P, that achieve approximation ratios of (n+1)/2 and n, respectively, for social cost minimization. In addition, no truthful mechanism can achieve an approximation ratio better than (n+1)/2; hence, mechanism K is optimal. While the approximation ratio provides a strong worst-case guarantee, it says little about mechanism performance on typical inputs. This paper investigates these two scheduling mechanisms beyond the worst case. We first show that mechanism K achieves a smaller social cost than mechanism P on every input; that is, mechanism K is pointwise better than mechanism P. Next, for each task, when machines' execution costs are independently and identically drawn from a task-specific distribution, we show that the average-case approximation ratio of mechanism K converges to a constant determined by the task-specific distribution. This bound is tight for mechanism K. For a better understanding of this distribution-dependent constant, on the one hand, we estimate its value by plugging in a few common distributions; on the other, we show that this converging bound improves a known bound (Zhang in Algorithmica 83(6):1638–1652, 2021) that only captures the single-task setting. Last, we find that the average-case approximation ratio of mechanism P converges to the same constant.
- Research Article
- 10.1007/s10878-024-01205-y
- Oct 1, 2024
- Journal of Combinatorial Optimization
- Imed Kacem + 1 more
We study scheduling problems with release times and rejection costs under the objective of minimizing the maximum lateness. Our main result is a PTAS for the single-machine problem with an upper bound on the rejection costs. This result is extended to parallel, identical machines. The corresponding problem of minimizing the rejection costs with an upper bound on the lateness is also examined. We show how to compute a PTAS for approximating the Pareto frontier over both objective functions on parallel, identical machines. Moreover, we present an FPTAS running in strongly polynomial time for the maximum lateness problem without release times on identical machines when the number of machines is constant. Finally, we extend this FPTAS to the case of unrelated machines.
- Research Article
- 10.1007/s10107-024-02132-w
- Aug 8, 2024
- Mathematical Programming
- Franziska Eberle + 4 more
The configuration balancing problem with stochastic requests generalizes well-studied resource allocation problems such as load balancing and virtual circuit routing. There are given m resources and n requests; each request has multiple possible configurations, each of which increases the load of each resource by some amount. The goal is to select one configuration for each request to minimize the makespan: the load of the most-loaded resource. In the stochastic setting, the amount by which a configuration increases the resource load is uncertain until the configuration is chosen, but we are given a probability distribution. We develop both offline and online algorithms for configuration balancing with stochastic requests. When the requests are known offline, we give a non-adaptive policy for configuration balancing with stochastic requests that O(log m / log log m)-approximates the optimal adaptive policy, which matches a known lower bound for the special case of load balancing on identical machines. When requests arrive online in a list, we give a non-adaptive policy that is O(log m)-competitive. Again, this result is asymptotically tight due to information-theoretic lower bounds for special cases (e.g., for load balancing on unrelated machines). Finally, we show how to leverage adaptivity in the special case of load balancing on related machines to obtain a constant-factor approximation offline and an O(log log m)-approximation online. A crucial technical ingredient in all of our results is a new structural characterization of the optimal adaptive policy that allows us to limit the correlations between its decisions.
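In the deterministic special case, the problem this entry describes can be stated very directly: pick one configuration per request so that the maximum resource load is minimized. A brute-force sketch for tiny instances (the data is illustrative; real instances need the approximation algorithms the abstract describes):

```python
from itertools import product

def best_makespan(m, requests):
    """Exhaustively pick one configuration per request to minimize the
    maximum load over m resources.

    Each request is a list of configurations; a configuration is a list
    of (resource, load_increase) pairs.
    """
    best = float("inf")
    for choice in product(*requests):       # one configuration per request
        loads = [0.0] * m
        for config in choice:
            for resource, load in config:
                loads[resource] += load
        best = min(best, max(loads))
    return best

# Two resources; each request can route its whole load to resource 0 or 1.
requests = [
    [[(0, 2.0)], [(1, 2.0)]],
    [[(0, 1.0)], [(1, 1.0)]],
    [[(0, 1.0)], [(1, 1.0)]],
]
print(best_makespan(2, requests))  # balanced split -> 2.0
```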
- Research Article
- 10.1016/j.cor.2024.106617
- Mar 11, 2024
- Computers & Operations Research
- Baruch Mor + 1 more
Due-Date assignment with acceptable lead-times on parallel machines
- Research Article
- 10.3390/a17020067
- Feb 4, 2024
- Algorithms
- Marko Đurasević + 3 more
The automated design of dispatching rules (DRs) with genetic programming (GP) has become an important research direction in recent years. One of the most important decisions in applying GP to generate DRs is determining the features of the scheduling problem to be used during the evolution process. Unfortunately, there are no clear rules or guidelines for the design or selection of such features, and often the features are simply defined without investigating their influence on the performance of the algorithm. However, the performance of GP can depend significantly on the features provided to it, and a poor or inadequate selection of features for a given problem can result in the algorithm performing poorly. In this study, we examine in detail the features that GP should use when developing DRs for unrelated machine scheduling problems. Different types of features are investigated, and the best combination of these features is determined using two selection methods. The obtained results show that the design and selection of appropriate features are crucial for GP, improving results by about 7% compared with using only the simplest terminal nodes without selection. In addition, the results show that it is not possible to outperform more sophisticated manually designed DRs when only the simplest problem features are used as terminal nodes. This shows how important it is to design appropriate composite terminal nodes to produce high-quality DRs.
- Research Article
- 10.3390/axioms13010037
- Jan 5, 2024
- Axioms
- Marko Đurasević + 1 more
Dynamic scheduling represents an important class of combinatorial optimisation problems that are usually solved with simple heuristics, the so-called dispatching rules (DRs). Designing efficient DRs is a tedious task, which is why it has been automated through the application of genetic programming (GP). Various approaches have been used to improve the results of automatically generated DRs, with ensemble learning being one of the best-known. The goal of ensemble learning is to create sets of automatically designed DRs that perform better together. One of the main problems in ensemble learning is the selection of DRs to form the ensemble. To this end, various ensemble construction methods have been proposed over the years. However, these methods are quite computationally intensive and require a lot of computation time to obtain good ensembles. Therefore, in this study, we propose several simple heuristic ensemble construction methods that can be used to construct ensembles quite efficiently and without the need to evaluate their performance. The proposed methods construct the ensembles solely based on certain properties of the individual DRs used for their construction. The experimental study shows that some of the proposed heuristic construction methods perform better than more complex state-of-the-art approaches for constructing ensembles.
- Research Article
- 10.1016/j.ejor.2023.12.030
- Jan 3, 2024
- European Journal of Operational Research
- Vipin Ravindran Vijayalakshmi + 2 more
We consider a natural, yet challenging variant of the parallel machine scheduling problem in which each machine imposes a preferential order over the jobs and schedules the jobs accordingly once assigned to it. We study the problem of minimizing the total completion time, distinguishing between identical and unrelated machines, machine-dependent and identical priority lists, or a constant number of different priority classes. Additionally, we consider the setting in which the priority list on a machine must satisfy longest processing time first. We resolve the computational complexity of the problem and provide a clear distinction between problems that are polynomial time solvable and APX-hard.
- Research Article
- 10.1080/00207543.2023.2275634
- Nov 1, 2023
- International Journal of Production Research
- Ioannis Avgerinos + 3 more
Motivated by the need for quick job (re-)scheduling, we examine an elaborate scheduling environment under the objective of total weighted tardiness minimisation. The examined problem variant moves well beyond the existing literature, as it considers unrelated machines, sequence-dependent and machine-dependent setup times, and a renewable resource constraint on the number of simultaneous setups. For this variant, we provide a relaxed MILP to calculate lower bounds, thus estimating a worst-case optimality gap. As a fast exact approach appears implausible for instances of practical importance, we extend known (meta-)heuristics to deal with the problem at hand, coupling them with a Constraint Programming (CP) component, vital to guarantee the non-violation of the problem's constraints, which optimally allocates resources with respect to tardiness minimisation. The validity and versatility of employing different (meta-)heuristics exploiting a relaxed MILP as a quality measure are revealed by our extensive experimental study, which shows that the methods deployed have complementary strengths depending on the instance parameters. Since the problem description has been obtained from a textile manufacturer where jobs of diverse size arrive continuously under tight due dates, we also discuss the practical impact of our approach in terms of both tardiness decrease and broader managerial insights.
- Research Article
- 10.1016/j.swevo.2023.101318
- Jul 1, 2023
- Swarm and Evolutionary Computation
- Marko Đurasević + 3 more
Combining single objective dispatching rules into multi-objective ensembles for the dynamic unrelated machines environment