Abstract

As the Internet grows, it is becoming increasingly difficult to collect performance measurements, to monitor its state, and to perform simulations efficiently. This is because the size and the heterogeneity of the Internet make it time-consuming and difficult to devise traffic models and analytic tools which would allow us to work with summary statistics. We explore a method to sidestep these problems by combining sampling, modeling, and simulation. Our hypothesis is this: if we take a sample of the input traffic and feed it into a suitably scaled version of the system, we can extrapolate from the performance of the scaled system to that of the original. Our main findings are as follows. When we scale an IP network which is shared by short- and long-lived TCP-like and UDP flows and which is controlled by a variety of active queue management schemes, performance measures such as queueing delay and drop probability are left virtually unchanged. We show this in theory and in simulations. This makes it possible to capture the performance of large networks quite faithfully using smaller-scale replicas.
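
To make the scaling recipe concrete, the sketch below (Python, not the paper's simulator; all parameter values and the scale factor p = 0.1 are illustrative assumptions) integrates a standard TCP/RED fluid model for N homogeneous long-lived flows sharing one link, once at full scale and once with the number of flows, the link capacity, and the RED thresholds all multiplied by p. Under this simplified model the scaled replica reproduces the full-scale queueing delay and drop probability, which is the kind of invariance the abstract describes.

    # Minimal sketch of the scaling experiment, using a TCP/RED fluid model:
    # N homogeneous long-lived TCP-like flows share one RED-controlled link.
    # We run the model at full scale and at scale factor p (flows, capacity,
    # and RED thresholds all multiplied by p) and compare queueing delay and
    # drop probability. All parameter values are illustrative assumptions.

    def red_drop_prob(avg_q, min_th, max_th, p_max):
        """RED drop probability as a function of the averaged queue length."""
        if avg_q < min_th:
            return 0.0
        if avg_q > max_th:
            return 1.0
        return p_max * (avg_q - min_th) / (max_th - min_th)

    def simulate(n_flows, capacity_pkts, min_th, max_th,
                 prop_delay=0.05, p_max=0.1, ewma=0.002,
                 dt=0.0005, t_end=60.0):
        """Euler-integrate the fluid model; return (queueing delay in
        seconds, drop probability) at the end of the run."""
        w = 1.0        # per-flow congestion window (packets)
        q = 0.0        # instantaneous queue (packets)
        avg_q = 0.0    # averaged queue used by RED
        for _ in range(int(t_end / dt)):
            rtt = prop_delay + q / capacity_pkts
            drop = red_drop_prob(avg_q, min_th, max_th, p_max)
            # AIMD fluid dynamics: additive increase, multiplicative decrease
            dw = 1.0 / rtt - (w * w / (2.0 * rtt)) * drop
            # Queue fills at the flows' aggregate rate, drains at link speed
            dq = n_flows * w / rtt - capacity_pkts
            w = max(w + dw * dt, 1.0)
            q = min(max(q + dq * dt, 0.0), 2 * max_th)   # buffer also scaled
            avg_q += ewma * (q - avg_q)
        return q / capacity_pkts, red_drop_prob(avg_q, min_th, max_th, p_max)

    if __name__ == "__main__":
        # Full-scale system: 100 flows on a 10,000 pkt/s link.
        delay_full, drop_full = simulate(100, 10000.0, 100, 300)
        # Scaled replica, p = 0.1: 10 flows, 1,000 pkt/s, thresholds scaled.
        p = 0.1
        delay_small, drop_small = simulate(int(100 * p), 10000.0 * p,
                                           100 * p, 300 * p)
        print(f"full  : delay={delay_full*1000:.2f} ms drop={drop_full:.4f}")
        print(f"scaled: delay={delay_small*1000:.2f} ms drop={drop_small:.4f}")

Because the scaled system's queue trajectory is simply p times the full-scale trajectory while the drain rate is also p times larger, the delay (queue divided by capacity) and the RED drop probability come out essentially identical in the two runs; this is a toy illustration of the invariance, not a substitute for the paper's analysis.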
