Abstract
Summary

Because emerging distributed applications and services in data centers share the network infrastructure, their performance is directly impacted by the network. As these applications become increasingly demanding, it is challenging to simultaneously satisfy their requirements of low latency, high throughput, and a low packet loss rate. Prior approaches typically resort to flow control or scheduling mechanisms that prioritize flows according to their demands; however, none of these methods alone can satisfy the varied demands of data center applications. To address this challenge, we propose tasch, a preference-aware flow scheduling mechanism deployed at the software network edge (i.e., end-host networking). The mechanism uses multiple separate queues for flows with different preferences, which guarantees low packet delay for latency-sensitive flows and provides bandwidth guarantees for throughput-sensitive flows. A coordinating algorithm shares the network resources among the queues with Pareto-optimality. tasch is implemented as a thin, pluggable kernel module in Linux-based hypervisors, sitting between the complex physical network and tenants' VMs. Extensive experiments based on flow traces of real-world applications verify the effectiveness of the network management mechanism.
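The abstract describes per-preference queues in which latency-sensitive flows are served first while throughput-sensitive flows still receive a guaranteed bandwidth share. As a rough illustration only (the paper's actual kernel-module algorithm is not reproduced here), the following minimal Python sketch shows one way such a two-queue policy can behave: strict priority for the latency queue, with a credit counter that reserves an assumed minimum fraction of dequeue opportunities for the throughput queue. All names and the `tput_min_share` parameter are hypothetical.

```python
import collections


class PreferenceAwareScheduler:
    """Toy two-queue scheduler: latency-sensitive packets are dequeued
    first, but the throughput-sensitive queue is guaranteed a minimum
    share of dequeue slots via a simple credit counter.

    This is an illustrative sketch, not the tasch algorithm itself.
    """

    def __init__(self, tput_min_share=0.3):
        self.latency_q = collections.deque()
        self.tput_q = collections.deque()
        # Hypothetical knob: guaranteed fraction of dequeue slots
        # reserved for throughput-sensitive flows.
        self.tput_min_share = tput_min_share
        self.credit = 0.0  # accumulated entitlement of the throughput queue

    def enqueue(self, pkt, latency_sensitive):
        (self.latency_q if latency_sensitive else self.tput_q).append(pkt)

    def dequeue(self):
        # Each dequeue slot accrues the throughput queue's guaranteed share.
        self.credit += self.tput_min_share
        if self.tput_q and (self.credit >= 1.0 or not self.latency_q):
            self.credit = max(0.0, self.credit - 1.0)
            return self.tput_q.popleft()
        if self.latency_q:
            return self.latency_q.popleft()
        return None
```

With `tput_min_share=0.3`, roughly every third or fourth dequeue serves the throughput queue even while latency-sensitive packets are backlogged, capturing the bandwidth-guarantee idea at a toy level.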
Concurrency and Computation: Practice and Experience