Abstract

Fluid models have been the main tools for Internet congestion control. By capturing how the average rate of each flow evolves, the fluid model proves useful: it predicts the equilibrium point to which the system trajectory converges and provides conditions under which that convergence is ensured, i.e., the system is stable. However, due to inherent randomness in the network caused by random packet arrivals or random packet marking, the actual system evolution is always stochastic in nature. In this paper, we show that we can be better off using a stochastic approach toward congestion control. We first prove that the equilibrium point of a fluid model can be quite different from the true average rate of the corresponding stochastic system. After describing the notion of stability under the two approaches, we show that a stable fluid model can impose overly strict restrictions on the choice of system parameters such as buffer size or link utilization. In particular, we show that under fluid models there exists a fundamental tradeoff between link utilization and the buffer size requirement for large systems, while in a more realistic setting with stochastic models, no such tradeoff exists. This implies that current congestion control design can be much more flexible, to the benefit of efficient usage of network resources.
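The gap the abstract points to can be seen even in a toy setting. The sketch below (not from the paper; the marking-probability form, capacity scale, and all parameter values are hypothetical) simulates a single AIMD-like flow with random packet marking and compares the fluid model's equilibrium point against the stochastic system's long-run average rate:

```python
import math
import random

C = 50.0   # hypothetical capacity scale used in the marking probability
A = 1.0    # additive increase per step

def p_mark(x):
    """Assumed marking probability, increasing in the rate x."""
    return min(1.0, max(0.0, x / C))

# Fluid model: the drift E[dx | x] = A*(1 - p(x)) - p(x) * x / 2
# vanishes at equilibrium. With p(x) = x/C this gives
#   2*A*C - 2*A*x - x^2 = 0  =>  x* = -A + sqrt(A^2 + 2*A*C).
x_star = -A + math.sqrt(A * A + 2.0 * A * C)

# Stochastic system: run the same dynamics with actual random marks.
random.seed(0)
x, total, steps = x_star, 0.0, 200_000
for _ in range(steps):
    if random.random() < p_mark(x):
        x /= 2.0      # multiplicative decrease on a mark
    else:
        x += A        # additive increase otherwise
    total += x
avg = total / steps

print(f"fluid equilibrium x* = {x_star:.3f}")
print(f"stochastic average   = {avg:.3f}")
```

Because the drift is nonlinear in x, the time-average of the stochastic system need not coincide with the fixed point of the fluid model, which is the discrepancy the abstract refers to.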

