Abstract

Switching architectures deploying shareable parallel memory modules are quite versatile in their ability to scale to higher capacities while retaining the advantage of sharing the entire memory resource among all input and output ports. The two main classes of such architectures, namely, the Shared-Multibuffer- (SMB-) based switch and the Sliding-Window- (SW-) based packet switch, both deploy parallel memory modules that are physically separate but logically connected. In spite of their similarity in using shareable parallel memory modules, they differ in switching control and in the scheduling of packets to the parallel memory modules: the SMB switch uses centralized control, whereas the SW switch uses decentralized control for its switching operations. In this paper, we present a new memory assignment scheme for the Sliding-Window (SW) switch that maximizes the parallel storage of packets across multiple memory modules. We compare the performance of a sliding-window switch deploying this new memory assignment scheme with that of an SMB switch architecture under identical traffic conditions and memory resources. The simulation results show that the new memory assignment scheme maximizes the parallel storage of packets arriving in a given switch cycle and does not require a speed-up of the memory modules. Furthermore, it provides superior performance compared to the SMB switch under the constraints of fixed memory bandwidth and memory resources.

Highlights

  • Due to the bursty nature of Internet traffic, router/switch architectures that allow sharing of the memory resource among the output ports are well suited to provide the best packet-loss and throughput performance [1, 2] for a fixed-size memory on a switching chip

  • Two well-known classes of switching architectures, namely, the Shared-Multibuffer- (SMB-) based switch architecture [3, 4] and the Sliding-Window- (SW-) based packet switch architecture [5, 6], attempt to overcome the physical limitation of memory speed by deploying parallel memory modules that can be shared among all input and output ports of these switches

  • The measures of interest in the simulation studies are the offered load for bursty traffic of a given average burst length (ABL), the memory-bandwidth requirement, switch throughput, packet-loss ratio (PLR), and memory utilization of the switch

Introduction

Due to the bursty nature of Internet traffic, router/switch architectures that allow sharing of the memory resource among the output ports are well suited to provide the best packet-loss and throughput performance [1, 2] for a fixed-size memory on a switching chip. It is therefore important to design a memory assignment scheme that maximizes the parallel writing of packets to different memory modules in a given switch cycle without requiring an increase in memory speed. The new memory assignment scheme presented here aims at maximizing the parallel writing of packets to multiple memory modules without requiring a speed-up of the memory modules. According to this scheme, we maintain an additional array, Temp[i] for i = 1 to m, where m is the number of memory modules deployed in the switch. This means that the memory modules can operate at the line speed and do not need a speed-up.
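The idea of a per-cycle array that tracks which modules have already been written can be sketched as follows. This is a minimal illustrative model, not the authors' implementation: the function name `assign_cycle`, the `occupancy` list, and the least-occupied tie-breaking rule are assumptions made for the example; only the role of `Temp` (marking a module as used within the current switch cycle) comes from the text.

```python
def assign_cycle(packets, m, occupancy):
    """Assign each packet arriving in one switch cycle to a memory module.

    packets   -- list of packet IDs arriving this cycle
    m         -- number of parallel memory modules
    occupancy -- per-module queue lengths (mutated), used to prefer emptier modules
    Returns a list of (packet, module) assignments.
    """
    # temp[i] is True once module i has been written this cycle, so each
    # module receives at most one packet per cycle -- i.e., no speed-up.
    temp = [False] * m
    assignments = []
    for pkt in packets:
        candidates = [i for i in range(m) if not temp[i]]
        if not candidates:
            # more packets than modules this cycle: remaining packets are lost
            break
        # illustrative policy: pick the least-occupied still-unused module
        best = min(candidates, key=lambda i: occupancy[i])
        temp[best] = True
        occupancy[best] += 1
        assignments.append((pkt, best))
    return assignments

occ = [0] * 4
result = assign_cycle(["p0", "p1", "p2"], 4, occ)
modules = [mod for _, mod in result]
print(len(result), len(set(modules)) == len(modules))  # 3 True
```

Because `temp` is reset at the start of every cycle, the sketch writes all packets of a cycle to distinct modules in parallel, which is the property the scheme is designed to maximize.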

Performance Evaluation
Performance Results and Discussion
Conclusion