Abstract

Congestion control protocols for background data are commonly conceived and designed to emulate low-priority traffic, which yields to transmission control protocol (TCP) flows. In the presence of even a few very long TCP flows, this behavior can starve background transfers of bandwidth, causing large numbers of background flows to accumulate for prolonged periods, which in turn can adversely affect the download delays of delay-sensitive TCP flows. In this paper, we study the fundamental problem of designing congestion control protocols for background traffic that minimize the impact on short TCP flows while achieving a desired average throughput over time. We obtain the corresponding optimal policy analytically under various assumptions about the available information. We give tight bounds on the distance between TCP-based background transfer protocols and the optimal policy, and identify the range of system parameters for which more sophisticated congestion control makes a noticeable difference. Based on these results, we propose an access control algorithm for systems where control can be exercised over aggregates of background flows, as in file servers. Simulations of simple network topologies suggest that this type of access control outperforms protocols that emulate low priority over a wide range of parameters.
