Abstract

Multi-server systems have received increasing attention with important implementations such as Google MapReduce, Hadoop, and Spark. Common to these systems are a fork operation, where jobs are first divided into tasks that are processed in parallel, and a later join operation, where completed tasks wait until the results of all tasks of a job can be combined and the job leaves the system. The synchronization constraint of the join operation makes the analysis of fork-join systems challenging and few explicit results are known. In this work, we model fork-join systems using a max-plus server model that enables us to derive statistical bounds on waiting and sojourn times for general arrival and service time processes. We contribute end-to-end delay bounds for multi-stage fork-join networks that grow in $\mathcal{O}(h \ln k)$ for $h$ fork-join stages, each with $k$ parallel servers. We perform a detailed comparison of different multi-server configurations and highlight their pros and cons. We also include an analysis of single-queue fork-join systems that are non-idling and achieve a fundamental performance gain, and compare these results to both simulation and a live Spark system.
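To make the fork-join dynamics described above concrete, the following minimal Python sketch simulates a single-stage fork-join queue: each arriving job forks into one task per server, every server processes its tasks in FCFS order, and the job departs only when its slowest task finishes (the join). This is an illustrative sketch, not the paper's analysis; the arrival rate `lam`, service rate `mu`, Poisson/exponential assumptions, and the function name `simulate_fork_join` are assumptions chosen for simplicity, whereas the paper's bounds hold for general arrival and service time processes.

```python
import random


def simulate_fork_join(k=4, lam=0.5, mu=1.0, num_jobs=100_000, seed=1):
    """Simulate a single-stage fork-join queue with k parallel FCFS servers.

    Each job forks into k tasks, one per server; the job joins (leaves)
    when the last of its k tasks completes.  Returns the mean sojourn time.
    Assumes Poisson arrivals (rate lam) and exponential task service times
    (rate mu) purely for illustration.
    """
    rng = random.Random(seed)
    arrival = 0.0
    depart = [0.0] * k          # last task departure time at each server
    total_sojourn = 0.0
    for _ in range(num_jobs):
        arrival += rng.expovariate(lam)
        for i in range(k):
            # Per-server max-plus (Lindley-type) recursion:
            # a task starts once the job has arrived and the server is free.
            depart[i] = max(arrival, depart[i]) + rng.expovariate(mu)
        # Join: the job's sojourn time is governed by its slowest task.
        total_sojourn += max(depart) - arrival
    return total_sojourn / num_jobs


if __name__ == "__main__":
    for k in (1, 2, 4, 8, 16):
        print(f"k={k:2d}  mean sojourn ~ {simulate_fork_join(k=k):.2f}")
```

Running this sketch for increasing k shows the mean sojourn time growing roughly logarithmically in the number of parallel servers for a single stage, which is consistent in spirit with the $\mathcal{O}(h \ln k)$ scaling of the end-to-end delay bounds (here with $h = 1$).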
