We use a stochastic model to study the throughput performance of various versions of the Transmission Control Protocol (TCP), namely Tahoe (including its older variant, which we call OldTahoe), Reno, and NewReno, in the presence of random losses on a wireless link in a local network. We model the cyclic evolution of TCP, with each cycle starting at the epoch at which recovery from the losses of the previous cycle begins. TCP throughput is computed as the reward rate in a certain Markov renewal-reward process. Our model allows us to study the performance implications of protocol features such as fast retransmit and fast recovery, the impact of coarse timeouts, and the effect of reducing the number of duplicate acknowledgements (ACKs) required to trigger a fast retransmit. In the local network environment, the key issue is avoiding a coarse timeout after a loss occurs: a large coarse-timeout granularity seriously degrades TCP performance, and the protocol versions differ in their ability to avoid a coarse timeout when random loss occurs; we quantify these differences. We show that, for large packet-loss probabilities, TCP-Reno performs no better than, and sometimes worse than, TCP-Tahoe. TCP-NewReno is a considerable improvement over TCP-Tahoe, and reducing the fast-retransmit threshold from three duplicate ACKs to one yields a large gain in throughput; the latter is similar to one of the modifications in the TCP-Vegas proposal. We explain several of these observations in terms of how the fast-recovery probabilities vary with the packet-loss probability. The results of our analysis compare well with a simulation that uses actual TCP code.
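As a brief sketch of the underlying computation (the symbols below are our illustrative notation, not taken from the paper), the Markov renewal-reward theorem gives the long-run throughput as the ratio of expected reward per cycle to expected cycle length, averaged over the stationary distribution $\pi$ of the Markov chain embedded at cycle-start epochs:

\[
  \eta \;=\; \frac{\sum_i \pi_i \, \mathbb{E}[\,V \mid X = i\,]}
                  {\sum_i \pi_i \, \mathbb{E}[\,T \mid X = i\,]},
\]

where $X$ denotes the state of the embedded chain at the start of a cycle, $V$ the number of packets successfully delivered during the cycle (the reward), and $T$ the cycle duration.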