Abstract

TCP congestion control is a vital component for the latency of Web services. In practice, a single congestion control mechanism is often used to handle all TCP connections on a Web server, e.g., Cubic by default on Linux. Given the complex and ever-changing networking environments, the default congestion control may not always be the most suitable one. Adjusting congestion control to suit different networking scenarios usually requires modifying the TCP stack on a server, which is difficult, if not impossible, due to the various operating system and application configurations on production servers. In this paper, we propose Mystique, a light-weight, flexible, and dynamic congestion control switching scheme that allows network or server administrators to deploy any congestion control scheme transparently, without modifying the existing TCP stacks on servers. We have implemented Mystique in Open vSwitch (OVS) and conducted extensive test-bed experiments in both public and private cloud environments. Experiment results demonstrate that Mystique effectively adapts to varying network conditions and can always employ the most suitable congestion control for each TCP connection. More specifically, Mystique reduces latency by 18.13% on average compared with any individual congestion control.
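The core idea above, switching each TCP connection to the most suitable congestion control based on its observed network conditions, can be sketched as a simple per-connection policy. The abstract does not specify Mystique's actual selection logic, so the thresholds, candidate schemes, and function below are purely illustrative assumptions, not the paper's algorithm:

```python
# Illustrative sketch only: Mystique's real selection policy is not given in
# this text. The thresholds and candidate schemes below are assumptions made
# for illustration.

def choose_congestion_control(loss_rate: float, rtt_var_ms: float) -> str:
    """Pick a congestion control scheme for one TCP connection from
    simple per-connection measurements (hypothetical policy)."""
    if loss_rate > 0.01:
        # Lossy path: a rate/model-based scheme tends to tolerate random loss.
        return "bbr"
    if rtt_var_ms > 50.0:
        # High jitter: a delay-based scheme reacts to queue buildup early.
        return "vegas"
    # Stable, low-loss path: keep the Linux default.
    return "cubic"

if __name__ == "__main__":
    print(choose_congestion_control(0.02, 10.0))   # lossy link -> bbr
    print(choose_congestion_control(0.001, 80.0))  # jittery link -> vegas
    print(choose_congestion_control(0.001, 10.0))  # clean link -> cubic
```

In the paper's design such a decision would be taken inside the vSwitch (OVS) on the connection's data path, so the server's own TCP stack never needs to change; the sketch above only captures the decision step, not the in-switch enforcement.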

Highlights

  • Recent years have seen many Web applications moved into cloud datacenters to take advantage of the economy of scale

  • Web server Pri-Cubic denotes a server located in a private cloud that runs Cubic as its default congestion control (CC)

  • Pub-Reno denotes a Web server located in a public cloud (i.e., AWS) with Reno configured as its CC


Introduction

Recent years have seen many Web applications move into cloud datacenters to take advantage of economies of scale. As Web applications become more interactive, service providers and users have grown far more sensitive to network performance, since any increase in network latency hurts user experience and providers' revenue. To reduce latency, administrators (or operators) often deploy network functions such as TCP proxies and WAN optimizers [3] [4]. However, the scalability of these appliances is a great challenge: TCP proxies break TCP's end-to-end semantics, while WAN optimizers add extra compression and decompression complexity.

Corresponding authors: Lin Cui and Weijia Jia.

