Abstract

Inspired by the potential of random scheduling in data centers, we propose a novel approach that combines an arbitrary dispatching policy with tail-latency prediction in heterogeneous fork-join network environments. Tail prediction is of practical importance in commercial data centers, where resources must be shared among many applications while client satisfaction is ensured through guaranteed service level objectives (SLOs). Many studies of parallel scheduling have relied on event-based simulations, but none of them incorporate dynamic variation in the number of tasks while maintaining a target load region in a precise and reliable way. In this paper, we present extensive case studies of the proposed prediction model in heterogeneous black-box systems using model-driven simulations. Experimental results show that, using a random scheduling algorithm together with the effects of varying request fan-out, tail latency can be predicted consistently with relative errors of 10% in high-load regions.
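To make the setting concrete, the sketch below is a minimal model-driven simulation of the kind of system the abstract describes: requests fork into a configurable fan-out of tasks, tasks are dispatched to heterogeneous servers by random scheduling, and the request completes when its slowest task does, so the tail (e.g. p99) latency can be measured. This is an illustrative assumption, not the authors' simulator; the server count, service rates, and arrival rate are hypothetical.

```python
# Minimal sketch (assumed parameters, not the paper's simulator) of a
# model-driven fork-join simulation with random dispatch, used to illustrate
# how tail latency can be estimated under a chosen fan-out and load level.
import random
import statistics


def simulate(num_servers=16, fan_out=4, arrival_rate=8.0,
             service_rates=None, num_requests=200_000, seed=1):
    """Simulate random dispatch of fork-join requests to heterogeneous servers.

    Each request forks into `fan_out` tasks sent to distinct servers chosen
    uniformly at random; the request completes when its slowest task does.
    Servers are modeled as FIFO queues with exponential service times whose
    rates differ (heterogeneity). Returns the p50 and p99 request latency.
    """
    rng = random.Random(seed)
    if service_rates is None:
        # Heterogeneous service capacities (hypothetical values).
        service_rates = [1.0 + 0.5 * (i % 4) for i in range(num_servers)]

    server_free_at = [0.0] * num_servers   # next time each server is idle
    latencies = []
    t = 0.0
    for _ in range(num_requests):
        t += rng.expovariate(arrival_rate)                  # Poisson arrivals
        servers = rng.sample(range(num_servers), fan_out)   # random dispatch
        finish_times = []
        for s in servers:
            start = max(t, server_free_at[s])               # FIFO queueing delay
            service = rng.expovariate(service_rates[s])
            server_free_at[s] = start + service
            finish_times.append(start + service)
        latencies.append(max(finish_times) - t)             # fork-join: slowest task
    latencies.sort()
    return (statistics.median(latencies),
            latencies[int(0.99 * len(latencies))])          # p50, p99 tail latency


if __name__ == "__main__":
    p50, p99 = simulate()
    print(f"p50={p50:.3f}  p99={p99:.3f}")
```

Sweeping `fan_out` and `arrival_rate` in such a model is one way to reproduce the kind of high-load, varying-fan-out conditions under which the abstract reports tail-latency prediction errors around 10%.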
