This work describes the performance trade-offs that arise when a wavelength-striped optical switching technique is deployed across different network topologies. The switching operation can be summarized as follows: (a) user data are divided into fixed-length fragments, (b) each fragment is assigned to a different wavelength, and (c) all wavelengths are switched simultaneously to the egress links. This technique of spreading user data across several simultaneously switched wavelengths is called wavelength striping; its purpose is to reduce latency and increase throughput over short-distance interconnects. Our starting point is previous work in which a building block implementing this basic switching function was built around semiconductor optical amplifiers (SOAs). A central issue addressed in this paper is the relation between cascadability and bit error rate (BER): our results indicate that a switch fabric can cascade up to five stages without exceeding a BER of 10⁻⁹ and without incurring power budget problems. We also show that the performance degradation introduced by cascading SOAs can be compensated by a star interconnect architecture, which we introduce. Finally, we examine the effect of scalability on cost and the effect of latency on TCP performance and reliability.
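As a minimal sketch of the striping step described above, the fragmentation and wavelength mapping might look like the following; the fragment size, wavelength count, and round-robin assignment policy are illustrative assumptions, not details taken from the paper:

```python
# Illustrative sketch of wavelength striping (not the authors' implementation):
# a payload is cut into fixed-length fragments, and each fragment is assigned,
# round-robin, to one of the available wavelengths so that all fragments can
# be switched to the egress link simultaneously.

FRAGMENT_SIZE = 64   # bytes per fragment (hypothetical value)
NUM_WAVELENGTHS = 4  # wavelengths available on the link (hypothetical value)

def stripe(payload: bytes) -> dict[int, list[bytes]]:
    """Divide payload into fixed-length fragments and map them to wavelengths."""
    fragments = [payload[i:i + FRAGMENT_SIZE]
                 for i in range(0, len(payload), FRAGMENT_SIZE)]
    # Pad the final fragment so every fragment has the same fixed length.
    if fragments and len(fragments[-1]) < FRAGMENT_SIZE:
        fragments[-1] = fragments[-1].ljust(FRAGMENT_SIZE, b"\x00")
    assignment: dict[int, list[bytes]] = {w: [] for w in range(NUM_WAVELENGTHS)}
    for index, fragment in enumerate(fragments):
        assignment[index % NUM_WAVELENGTHS].append(fragment)
    return assignment

if __name__ == "__main__":
    striped = stripe(b"example user data to be striped across wavelengths" * 8)
    for wavelength, frags in striped.items():
        print(f"wavelength {wavelength}: {len(frags)} fragment(s)")
```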