Abstract

For a given amount of buffer resources in the switch, shared‐memory packet switches are known to provide the best possible performance for bursty data traffic in networks and the Internet. Nevertheless, scaling shared‐memory packet switches to larger sizes has proven difficult, so packets cannot be processed at high network speeds, chiefly because of the physical constraints imposed by the memory access rate and the centralized control of switching functions in shared‐memory switches. This study investigates a scalable switch for high‐speed networks, the parallel packet switch (PPS), to overcome these constraints. A PPS comprises multiple packet switches operating independently and in parallel. The PPS class is characterized by parallel center‐stage switches whose memory buffers run slower than the external line rate: each lower‐speed packet switch operates at a fraction of the external line rate R, for example at an internal line rate R/K, where K is the number of center‐stage switches. We develop and investigate a PPS that distributes cells or variable‐length packets to the low‐speed switches and uses push‐in arbitrary‐out (PIAO) queues at the outputs. We present a novel Markov chain model that accurately captures the throughput, cell delay, and cell drop rate of the PPS. Comparison with simulation demonstrates that the developed Markov chain model is accurate under practical network loads and that the PPS with PIAO queues performs considerably better than previously known classes of shared‐memory switch architecture.
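The core dispatch idea above can be sketched as follows. This is a minimal illustrative model, not the paper's algorithm: it assumes a simple round‐robin spread of arriving cells over K center‐stage switches, so that each internal switch sees only about 1/K of the external load (names such as `dispatch_cells` are hypothetical).

```python
from collections import deque

def dispatch_cells(cells, k):
    """Spread arriving cells round-robin over k center-stage switches.

    Each queue models one lower-speed switch; because it receives
    roughly 1/k of the arrivals, it can run at an internal rate R/K
    while the external line runs at rate R.
    """
    queues = [deque() for _ in range(k)]
    for i, cell in enumerate(cells):
        queues[i % k].append(cell)  # cell i goes to switch i mod k
    return queues

# 12 cells arriving at the external line, K = 4 center-stage switches:
# each switch receives 12/4 = 3 cells, i.e. 1/K of the offered load.
queues = dispatch_cells(list(range(12)), 4)
print([len(q) for q in queues])  # → [3, 3, 3, 3]
```

In the paper's PPS the dispatch policy and the PIAO output queues determine the performance analyzed by the Markov chain model; the round‐robin rule here is only the simplest load‐balancing choice.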
