Abstract

Fork-join systems play a pivotal role in the analysis of distributed systems, telecommunication infrastructures, and storage systems. In this article, we consider a fork-join system consisting of K parallel servers, each of which works on one of the K tasks that form each job. The system allocates a fixed amount of computational resources among the K servers, thereby determining their service speeds. The goal of this article is to study resource allocation policies among the servers. We assume that the queueing disciplines of the fork- and join-queues are First Come First Served. At each epoch, at most K tasks are in service while the others wait in the fork-queues. We propose an algorithm with a very simple implementation that allocates the computational resources so as to minimize the join-queue lengths, and hence to reduce the expected job service time. We study its performance in saturation and under exponential service times. The model admits an elegant closed-form stationary distribution. Moreover, we provide an algorithm to derive, numerically or symbolically, the marginal probabilities of the join-queue lengths, from which expressions for the expected join-queue length and the expected response time under immediate join can be obtained. Finally, we compare the performance of the proposed resource allocation algorithm with that of other strategies.
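To make the setting concrete, the following is a minimal simulation sketch of a saturated fork-join system with K servers, exponential task sizes, and a fixed total capacity. The allocation rule used here (split all capacity equally among the "laggard" servers, i.e. those with the fewest completed tasks waiting in the join-queue) is an illustrative assumption in the spirit of minimizing join-queue lengths, not necessarily the exact policy proposed in the article.

```python
import random

def simulate(K=3, capacity=1.0, steps=100_000, seed=42):
    """Saturated fork-join system: the fork-queues are never empty, so
    every server always has a task available.  J[k] counts tasks already
    finished by server k that wait in its join-queue for sibling tasks."""
    rng = random.Random(seed)
    J = [0] * K
    joins = 0
    area = 0.0          # time-integral of the total join-queue length
    t = 0.0
    for _ in range(steps):
        # Illustrative policy (an assumption, not the paper's exact rule):
        # give all capacity, split equally, to the laggard servers.
        m = min(J)
        laggards = [k for k in range(K) if J[k] == m]
        speed = capacity / len(laggards)
        # Exponential unit-mean task sizes: the first of the laggards to
        # finish does so after an Exp(len(laggards) * speed) time.
        dt = rng.expovariate(len(laggards) * speed)
        area += dt * sum(J)
        t += dt
        k = rng.choice(laggards)
        J[k] += 1
        if min(J) >= 1:             # all K siblings present: one job joins
            J = [j - 1 for j in J]
            joins += 1
    return joins / t, area / t      # throughput, mean total join-queue length

throughput, mean_join_queue = simulate()
```

Under this rule no capacity is ever spent on a server that is already ahead, so with K = 3 and unit capacity the long-run throughput approaches 1/3 jobs per unit time and the join-queue lengths stay tightly bunched; richer policies and their exact stationary analysis are the subject of the article.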
