For latency-sensitive data processing applications in the cloud, concurrent data-parallel tasks need to be scheduled and processed quickly. A data-parallel task usually consists of a set of sub-tasks, generating a set of flows that is collectively referred to as a coflow. State-of-the-art schedulers collect coflow information in the cloud to optimize coflow-level performance. However, most coflows, classified as small coflows because they consist of only short flows, have been largely overlooked. This paper presents OptaX, a decentralized network scheduling service that collaboratively schedules data-parallel tasks' small coflows. OptaX adopts a cross-layer design, compatible with COTS (commercial off-the-shelf) switches, that leverages the sendbuffer information in the kernel to adaptively optimize flow scheduling in the network. Specifically, OptaX (i) monitors the system calls (syscalls) in the hosts to obtain their sendbuffer footprints, and (ii) recognizes small coflows and assigns high priorities to their flows. OptaX transfers these flows in a FIFO manner by adjusting two TCP attributes: window size and round-trip time. We have implemented OptaX as a Linux kernel module. The evaluation shows that OptaX is at least 2.2× faster than fair sharing and 1.2× faster than simply assigning small coflows the highest priority. We further apply OptaX to improve the small I/O performance of Ursa, a distributed block storage system that provides virtual disks where small I/O is dominant. Ursa with OptaX achieves significantly lower small I/O latency than the original Ursa.
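To make the classification step concrete, the following is a minimal C sketch of how a host-side agent might account sendbuffer footprints and keep a flow at high priority while its coflow remains small. The names (optax_account_send, SMALL_COFLOW_BYTES, HIGH_PRIORITY) and the byte threshold are assumptions for illustration; the actual OptaX implementation hooks syscalls inside a Linux kernel module and adjusts TCP window size and round-trip time rather than wrapping send() in user space.

```c
/* Hypothetical user-space sketch of small-coflow classification.
 * All identifiers and thresholds here are assumptions, not taken
 * from the paper; OptaX itself runs as a Linux kernel module. */
#include <stdint.h>
#include <stddef.h>
#include <sys/socket.h>

#define SMALL_COFLOW_BYTES (100 * 1024)  /* assumed cutoff for "small" */
#define HIGH_PRIORITY      6             /* assumed high SO_PRIORITY band */

struct flow_state {
    int      fd;          /* socket carrying this coflow's flow      */
    uint64_t sent_bytes;  /* sendbuffer footprint observed so far    */
    int      demoted;     /* set once the coflow stops being "small" */
};

/* Account bytes handed to the send buffer; while the cumulative
 * footprint stays below the threshold, keep the flow at high
 * priority so small coflows drain ahead of large ones. */
static void optax_account_send(struct flow_state *f, size_t len)
{
    f->sent_bytes += len;
    if (!f->demoted && f->sent_bytes <= SMALL_COFLOW_BYTES) {
        int prio = HIGH_PRIORITY;
        setsockopt(f->fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio));
    } else if (!f->demoted) {
        int prio = 0;  /* fall back to default priority once the coflow grows */
        setsockopt(f->fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio));
        f->demoted = 1;
    }
}
```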