Abstract

Future computer networks are expected to carry bursty traffic. Shortest-path routing protocols such as OSPF and RIP have the disadvantage of causing bottlenecks due to their inherent single-path routing. That is, the uniformly selected shortest path between a source and a destination may become highly congested even when many other paths have low utilization. We propose a family of routing schemes that distribute data traffic over the whole network via bounded randomization; in this way, they remove bottlenecks and consequently improve network performance. For each data message to be sent from a source s to a destination d, each of the proposed routing protocols randomly chooses an intermediate node e from a selected set of network nodes and routes the data message along a shortest path from s to e. It then routes the data message via a shortest path from e to d. Intuitively, we would expect this to increase the effective bandwidth between each source-destination pair. Our simulation results indicate that the family of proposed load-balanced routing protocols distributes traffic evenly over the whole network and, in consequence, increases network performance with respect to throughput, message loss, message delay, and link utilization. Moreover, implementing our scheme requires only a simple extension to any shortest-path routing protocol.
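To make the two-leg routing step concrete, the sketch below shows one way it could be realized, assuming the network is given as a weighted adjacency map. The helper names (`dijkstra_path`, `load_balanced_route`, `candidates`) are illustrative and not taken from the paper; the candidate set stands in for either the whole node set (full randomization) or a bounded subset (bounded randomization).

```python
import heapq
import random

def dijkstra_path(graph, src, dst):
    """Shortest path by link weight. graph: {node: {neighbor: weight}}."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None  # destination unreachable
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

def load_balanced_route(graph, s, d, candidates):
    """Route one message from s to d via a randomly chosen intermediate node e.

    candidates: nodes the scheme may pick e from (all nodes for full
    randomization, a bounded subset for bounded randomization).
    """
    e = random.choice(list(candidates))
    first_leg = dijkstra_path(graph, s, e)
    second_leg = dijkstra_path(graph, e, d)
    if first_leg is None or second_leg is None:
        return None
    return first_leg + second_leg[1:]  # drop the duplicate of e

# Example: a small 5-node network; different messages between the same pair
# may take different routes because the intermediate node is drawn at random.
net = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1, "E": 1},
    "E": {"D": 1},
}
print(load_balanced_route(net, "A", "E", candidates=set(net)))
```

Because each message may follow a different concatenation of two shortest paths, traffic between a source-destination pair is spread over many links instead of concentrating on a single shortest path.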

Highlights

  • In a wide-area store-and-forward computer network, such as the Internet, routing protocols are essential

  • Our simulation studies were done on the Maryland Routing Simulator (MaRS) [1], which is a network simulator developed at the University of Maryland

  • Load-Balanced Routing via Full Randomization (LBR-FR) exhibits good performance with respect to throughput, message loss and message delay when the number of connections is low; it exhibits the worst performance among the considered routing protocols when the number of connections is high

Summary

Introduction

In a wide-area store-and-forward computer network, such as the Internet, routing protocols are essential. They are mechanisms for finding an efficient path between any pair of source and destination nodes in the network and for routing data messages along this path. The shortest path may be highly congested, even when many other paths to the destination have low link utilization. This congestion may trigger the loss of valuable data messages due to buffer overflow at some node. If the network uses shortest-path routing protocols to carry bursty traffic, many of these data messages might be dropped due to the limited buffer space of each node when these shortest paths are congested. We sketch the max-flow/min-cut theorem [8].
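For reference, the theorem mentioned above can be stated as follows; this is the standard formulation, and the paper's own sketch may use different notation.

```latex
% Max-flow/min-cut theorem (standard statement). For a network G = (V, E)
% with edge capacities c(u, v), source s and destination d:
\[
  \max_{f} |f|
  \;=\;
  \min_{\substack{S \subseteq V \\ s \in S,\; d \notin S}}
  \sum_{u \in S,\; v \notin S} c(u, v),
\]
% where f ranges over feasible s-d flows and |f| denotes the value of f:
% the maximum flow from s to d equals the capacity of a minimum s-d cut.
```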

Networks and Flows
Maximum Flow and Minimum Cut in a Network
Overview of the Load-Balanced Routing
Load-Balanced Routing via Full Randomization
Load-Balanced Routing via Bounded Randomization
Protocol Notation
Specification of the Load-Balanced Routing Protocol
Simulation Model
Simulation Results
Future Work
Conclusions
