Abstract

In recent years, issues regarding the behavior of TCP in high-speed and long-distance networks have been extensively addressed in the networking research community, both because TCP is the most widespread transport protocol in the current Internet and because the bandwidth-delay product continues to grow. The well-known problem of TCP in high bandwidth-delay product networks is that the TCP Additive Increase Multiplicative Decrease (AIMD) probing mechanism is too slow in adapting the sending rate to the end-to-end available bandwidth. To overcome this problem, many modifications have been proposed, such as FAST TCP [1], STCP [2], HSTCP [3], HTCP [4], BIC TCP [5] and CUBIC TCP [6]. However, two key parameters are often omitted in the analysis of new congestion control proposals: the TCP retransmission queue size at the sender and the receiver queue that handles out-of-order packets. The first buffer is used by the TCP sender to save all outstanding packets, whereas the second one is used by the receiver entity to backlog all out-of-order received packets that cannot yet be delivered to the receiving application. A wrong choice of these buffer sizes can lead to significant underutilization of the link capacity. The goal of this work is to investigate, by using both analytical models and simulation results, the optimal sizing of the retransmission and out-of-order buffers in the case of modified TCP congestion control settings, and to highlight differences between NewReno TCP and SACK TCP packet loss recovery schemes in terms of TCP internal buffer requirements. An important result is that the SACK option turns out to be particularly effective in reducing buffer requirements in the case of very high bandwidth-delay product links.

I. BACKGROUND ON TCP CONGESTION CONTROL

The basic TCP congestion control essentially consists of a probing phase and a decreasing phase. The probing phase of standard TCP consists of an exponential phase (i.e.
the “Slow Start” phase) and a linear increasing phase (i.e. the “Congestion Avoidance” phase). The probing phase stops when congestion is experienced in the form of a timeout or the reception of DupThresh duplicate acknowledgments (DUPACKs), where the default value of DupThresh is 3. The TCP dynamic behaviour in “steady state” conditions can be considered, with good approximation, a sequence of congestion avoidance phases each followed by the reception of DupThresh DUPACKs. When DupThresh DUPACKs are received, TCP implements a multiplicative decrease behavior. The generalization of the classic additive increase multiplicative decrease TCP settings can be made as follows:

a) on ACK reception: cwnd ← cwnd + a(cwnd)
b) when DupThresh DUPACKs are received: cwnd ← cwnd − b · cwnd

where a(cwnd) is 1/cwnd and b is 0.5 in the case of classic TCP. Most of the TCP congestion control modifications proposed for high bandwidth-delay networks can be described by modifying a and b. Some protocols (e.g. STCP) employ constant values for a and b, whereas other protocols, such as HSTCP and CUBIC, modify them dynamically. All these protocols can use either the NewReno or the SACK recovery procedure, independently of the congestion control algorithm. NewReno TCP and SACK TCP differ in the recovery phase, i.e. when TCP recovers from packet losses. In particular, the NewReno TCP recovery phase is based only on the cumulative ACK information, whereas the SACK TCP receiver exploits the TCP Selective Acknowledgment option to inform the sender about received out-of-order blocks. This information is employed by the sender to recover from multiple losses more efficiently than NewReno TCP.

This work is supported by the Italian Ministry for University and Research (MIUR) under the PRIN project FAMOUS (http://www.tnt.dist.unige.it/famous/).
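As an illustration, the generalized AIMD update rules above can be sketched as follows. This is a minimal sketch only: the function names are hypothetical (not from any real TCP stack), and the constant STCP-style parameter shown at the end is an illustrative assumption rather than a value taken from this paper.

```python
# Minimal sketch of the generalized AIMD update rules:
#   a) on ACK reception:          cwnd <- cwnd + a(cwnd)
#   b) on DupThresh DUPACKs:      cwnd <- cwnd - b * cwnd
# Function names are hypothetical, for illustration only.

def on_ack(cwnd, a):
    """Additive (or generalized) increase on each ACK."""
    return cwnd + a(cwnd)

def on_dupthresh_dupacks(cwnd, b):
    """Multiplicative decrease when DupThresh DUPACKs are received."""
    return cwnd - b * cwnd

# Classic TCP: a(cwnd) = 1/cwnd, b = 0.5
classic_a = lambda cwnd: 1.0 / cwnd
cwnd = 8.0
cwnd = on_ack(cwnd, classic_a)            # 8.0 + 1/8 = 8.125
cwnd = on_dupthresh_dupacks(cwnd, 0.5)    # halved: 4.0625

# STCP-style constant increment (value assumed here for illustration):
stcp_a = lambda cwnd: 0.01
cwnd = on_ack(cwnd, stcp_a)               # grows by a fixed amount per ACK
```

The same two functions describe both classic TCP and the high-speed variants discussed above; only the choice of a(cwnd) and b changes.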
II. ANALYTICAL MODEL

The goal of this section is to investigate the effect of the different increment and decrement factors used by congestion control algorithms in a single-connection scenario. For the sake of simplicity, we consider a single-bottleneck scenario with constant a and b values, as in the case of the STCP congestion control. The considered network setting is shown in Figure 1, where B is the bottleneck buffer size, C is the link service rate (in units of packets/s), Tfw is the propagation delay from the TCP sender S to the bottleneck buffer, and Tfb is the propagation delay from the bottleneck buffer to the TCP receiver R and then back to the sender. RTTm is Tfb + Tfw, and RTT is the sum of RTTm and the queuing delay in the bottleneck buffer (we ignore the packet transmission time 1/C). The standard TCP congestion control algorithm consists of a probing phase that increases the input rate until the bottleneck buffer fills and the network capacity is reached. At that point, packets start to be lost and the receiver sends duplicate acknowledgments.
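As a simple numerical illustration of the delay quantities defined above, the following sketch computes RTTm and RTT for an assumed bottleneck configuration. The numeric values of C, B, Tfw and Tfb are illustrative assumptions, not parameters taken from the paper.

```python
# Sketch of the single-bottleneck delay model above (illustrative values).

def rtt(t_fw, t_fb, queued_packets, capacity):
    """RTT = RTTm + queuing delay, with RTTm = Tfw + Tfb.
    The packet transmission time 1/C is ignored, as in the text."""
    rtt_min = t_fw + t_fb                    # RTTm
    return rtt_min + queued_packets / capacity

C = 12500.0            # link service rate, packets/s (assumed)
B = 250                # bottleneck buffer size, packets (assumed)
Tfw, Tfb = 0.02, 0.03  # propagation delays, seconds (assumed)

print(rtt(Tfw, Tfb, 0, C))   # empty buffer: RTT equals RTTm (about 0.05 s)
print(rtt(Tfw, Tfb, B, C))   # full buffer: RTTm + B/C (about 0.07 s)
```

The gap between the two printed values, B/C, is the maximum queuing delay the bottleneck buffer can add, which is exactly the range over which RTT varies as the probing phase fills the buffer.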
