The traditional output buffer architecture in ATM switches is realized in one of two forms: shared or separate. In the shared design, all N output links draw on a single large common buffer of size S cells, the aim being efficient use of the total buffer space. Under the separate buffer architecture, a distinct buffer of size S/N cells is assigned to each output link, the goal being fair buffer usage by every individual output link. In both architectures, however, the buffer organization is fixed permanently at the time the switch fabric is designed and cannot be altered during actual operation, which sits poorly with the highly dynamic and stochastic nature of ATM traffic. This paper introduces a fundamentally new approach, termed the Predictive Dynamic Output Buffer Reconfiguration (PDOBR) architecture, in which the output buffer organization of the switch fabric is reconfigured dynamically, i.e., during network operation, under the control of the call processor, so that the network incurs minimal cell drop from buffer overflow. Under PDOBR, the output buffer at every node of the ATM network is organized as separate buffers of size S/N cells, one per output link, plus a "floating" buffer of the same size that may be appended, at runtime, to any one of the output links to augment its net buffer capacity. In contrast to the shared buffer, which experiences severe congestion and gross unfairness under bursty traffic, separate buffers provide fair buffer availability across the output links and thereby efficient behavior. Using its knowledge of the successful user call requests and the magnitudes of the corresponding sustained cell rate (SCR) bandwidth requests, the call processor at every ATM node computes the net bandwidth commitment for each of its output links as the simple sum of the corresponding SCR values. The call processor then compares the net bandwidth commitment of every output link against an empirical threshold, obtained through systematic analysis, to predict and identify a single output link, if any, that is likely to incur relatively heavy cell traffic. The "floating" buffer is appended to that output link before the corresponding user's traffic is launched. Experiments confirm the intuition that increasing the size of the "floating" buffer improves the efficiency of PDOBR. Further analysis reveals that combining faster technology, FAST Schottky TTL, with a threshold setting of 70 Mb/s yields a very high throughput of 85.05% and a low cell drop rate of 14.95%. The highest performance, however, a throughput of 91.56% coupled with a cell drop rate of 8.44%, is achieved when the threshold is set at 70 Mb/s and the "floating" buffer size is increased by 18.98%, the difference, ΔL2, between the absolute cell drop rates obtained under the 70 Mb/s and 100 Mb/s threshold scenarios.
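As a minimal sketch of the prediction step described above, the following Python fragment (the function name predict_heavy_link, the SCR table, and the data values are illustrative assumptions, not taken from the paper) sums the SCR values committed on each output link and selects the single link, if any, whose net bandwidth commitment exceeds the empirical threshold (70 Mb/s in the reported experiments) to receive the "floating" buffer.

```python
# Illustrative sketch of the PDOBR prediction step (names and data are hypothetical).
# The call processor sums the SCR values of the admitted calls on each output link
# and appends the single "floating" buffer to the one link, if any, whose net
# bandwidth commitment exceeds the empirical threshold.

THRESHOLD_MBPS = 70.0  # empirical threshold used in the reported experiments


def predict_heavy_link(scr_by_link):
    """scr_by_link: dict mapping output-link id -> list of SCR values (Mb/s)
    for the calls admitted on that link. Returns the id of the link that
    should receive the floating buffer, or None if no link exceeds the
    threshold."""
    # Net bandwidth commitment per link: simple summation of SCR values.
    commitment = {link: sum(scrs) for link, scrs in scr_by_link.items()}
    # Links whose commitment exceeds the threshold.
    over = {link: c for link, c in commitment.items() if c > THRESHOLD_MBPS}
    if not over:
        return None
    # Only one floating buffer exists, so pick the most heavily committed link.
    return max(over, key=over.get)


if __name__ == "__main__":
    # Hypothetical SCR values in Mb/s for three output links.
    scr_table = {
        0: [20.0, 25.0, 30.0],  # 75 Mb/s committed -> exceeds the threshold
        1: [10.0, 15.0],        # 25 Mb/s committed
        2: [40.0, 20.0],        # 60 Mb/s committed
    }
    heavy = predict_heavy_link(scr_table)
    print("Append floating buffer to output link:", heavy)  # -> 0
```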