Abstract

As the transmission speeds of emerging data networks scale up, the effects of propagation delays, which do not scale, become quite consequential for the design of the sliding windows needed for congestion control. It was previously shown that optimal window lengths grow linearly with the transmission speed λ, thus making the cost of memory for buffers a major factor. However, it was also shown that the moments of the number of packets in the buffers are only \(O(\sqrt{\lambda})\); the remaining packets are in the course of being propagated. This fact underlies the proposal made here, which requires small \(O(\sqrt{\lambda \ln \lambda})\) buffers and yet guarantees that the ratio of the realized throughput to the ideal throughput approaches unity with increasing λ. That is, buffers, when properly sized, overflow so rarely that even with a rudimentary (conversely, easily implemented) protocol like go-back-n, the loss in throughput due to retransmissions is negligible. This result is arrived at by obtaining an explicit characterization, for large λ, of the tail of the distribution of buffer occupancy in the closed network with window-sized buffers; in the case of a single-hop virtual circuit the characterization is by a Gaussian conditioned to be nonnegative. Numerical and simulation results are presented to corroborate the performance predictions of the theory for the case of a 45 Mbits/sec transmission speed.
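As a rough illustration of the scaling contrast stated in the abstract (window lengths growing linearly with λ versus buffers of only \(O(\sqrt{\lambda \ln \lambda})\)), the following Python sketch compares the two quantities. The round-trip delay and the constant factor are illustrative assumptions, not values taken from the paper.

```python
import math

def window_size(lam, rtt):
    """Ideal sliding-window length: the bandwidth-delay product,
    which grows linearly with the transmission speed lam (packets/sec)."""
    return lam * rtt

def buffer_size(lam, c=1.0):
    """Hypothetical buffer sizing rule following the O(sqrt(lam * ln(lam)))
    scaling claimed in the abstract; the constant c is illustrative only."""
    return c * math.sqrt(lam * math.log(lam))

# Example: as speed grows, the buffer grows far more slowly than the window.
rtt = 0.05  # assumed 50 ms round-trip propagation delay
for lam in (1e3, 1e4, 1e5):
    print(f"lambda={lam:>8.0f}  window={window_size(lam, rtt):>8.0f}  "
          f"buffer={buffer_size(lam):>6.0f}")
```

The printed columns simply make the point that the memory needed to avoid nearly all overflows is a vanishing fraction of the window itself as λ increases.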
