Abstract

Although Optical Packet Switching (OPS) has emerged as a promising solution for future Data Center (DC) networks, offering increased capacity and radix while retaining sub-μs latency performance, the requirement for ultra-fast burst-mode reception has been a serious restraining factor. We attempt to overcome this limitation and demonstrate, for the first time to our knowledge, an end-to-end optical packet switch link through the 1024-port 25.6 Tb/s Hipoλaos OPS, featuring burst-mode reception with <50 ns locking time. The switch performance for unicast traffic is evaluated via Bit-Error-Rate (BER) measurements, and error-free performance at a BER of 10⁻⁹ is reported for all validated port combinations, with a mean power penalty of 2.88 dB. Moreover, multicast flows from two different ports of the switch were successfully received, validating the architecture's credentials for efficient multicast packet delivery. Taking one step further towards a realistic evaluation of an OPS-enabled DC, a simulation analysis was conducted, proving that low-latency performance, including the burst-mode reception time overhead, can be achieved in a Hipoλaos-switched DC with up to 100% throughput for a variety of traffic profiles.
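For context on the reported figure of merit, the short sketch below illustrates how a mean power penalty such as the quoted 2.88 dB is conventionally derived: the receiver sensitivity (the optical power needed to reach a BER of 10⁻⁹) is measured for each validated port combination and compared against a back-to-back reference. All port labels and dBm values in the sketch are hypothetical placeholders, not measurements from the paper.

```python
# Illustrative sketch (not the paper's data): deriving a mean power penalty
# from receiver sensitivities at the BER = 1e-9 threshold.
# Sensitivity = received optical power (dBm) needed for error-free (1e-9) operation;
# the penalty of a port combination is its sensitivity minus the
# back-to-back (switch-bypassed) sensitivity.

back_to_back_dbm = -18.0  # hypothetical back-to-back sensitivity at BER 1e-9

# hypothetical sensitivities for a few validated port combinations
port_sensitivity_dbm = {
    "in0->out0": -15.4,
    "in0->out1": -15.1,
    "in1->out0": -14.9,
}

penalties = {port: s - back_to_back_dbm for port, s in port_sensitivity_dbm.items()}
mean_penalty = sum(penalties.values()) / len(penalties)

for port, p in penalties.items():
    print(f"{port}: {p:.2f} dB")
print(f"mean power penalty: {mean_penalty:.2f} dB")
```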

Highlights

  • At the dawn of the exaflop era, the growing demand for ubiquitous high-bandwidth processing and cloud computing applications, along with the rise of IoT and big-data analytics, has stimulated an unprecedented increase in the data traffic residing within Data Centers (DC) [1]

  • The payload size was set to the maximum supported by the employed Pulse Pattern Generator (PPG), in order to stress the capability of the Burst-Mode Clock & Data Recovery (BM-CDR) circuit to retain its frequency lock even for extremely long packets

  • Although hyperscale DC switches have pushed the capacity envelope up to 25.6 Tb/s, scaling beyond that point is expected to build upon novel concepts, with optical packet switching offering a viable path to increased capacity and radix while retaining sub-μs latency performance



Introduction

At the dawn of the exaflop era, the growing demand for ubiquitous high-bandwidth processing and cloud computing applications, along with the rise of IoT and big-data analytics, has stimulated an unprecedented increase in the data traffic residing within Data Centers (DC) [1]. We have recently introduced the Hipoλaos OPS architecture, which supports configurations of up to 1024 ports and 25.6 Tb/s capacity in conjunction with sub-μs latency values [23]–[28]. Still, all these demonstrations have been limited to the realization of the optical forwarding plane, assuming synchronized source and destination nodes and ignoring the requirement for asynchronous packet traffic between the OPS network nodes.

As current and future network protocols feature short data bursts or radio on/off functionality, BM-CDR devices with ns-scale settling times are being actively developed, supporting rates up to or above 25 Gb/s in order to keep pace with the respective protocols' lane rates [30], [38], [36], [39]. To achieve this data-rate scaling, researchers have relied on different techniques and technologies, such as advanced FinFET CMOS platforms, additional equalizers that compensate for the bandwidth limitations introduced by optical (de)modulators, or clean clock sources originating from an external source or PLL.

The incorporation of OPS in a DC environment could potentially bring significant performance and energy/cost benefits, as indicated in several studies [19], [40], provided that some additional, yet limited, modifications are enforced in the protocol stack, as has been the case with the recent adoption of OCS solutions, where SDN orchestration ensured interoperability with already deployed equipment.
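To make the role of ns-scale BM-CDR settling concrete, the back-of-the-envelope sketch below estimates how a fixed locking time is amortized over the packet payload at a 25 Gb/s lane rate. The 50 ns lock time matches the figure reported for the demonstrated link; the payload sizes are illustrative assumptions, not values from the experiment.

```python
# Back-of-the-envelope sketch (assumptions, not results from the paper):
# a fixed BM-CDR locking overhead t_lock consumes
# t_lock / (t_lock + t_payload) of the link time per packet.

LINE_RATE_BPS = 25e9   # 25 Gb/s lane rate, as cited for state-of-the-art BM-CDRs
T_LOCK_S = 50e-9       # <50 ns locking time reported for the demonstrated link

for payload_bytes in (64, 256, 1500, 9000):  # illustrative packet sizes
    t_payload = payload_bytes * 8 / LINE_RATE_BPS   # payload serialization time
    overhead = T_LOCK_S / (T_LOCK_S + t_payload)    # fraction of link time lost to locking
    print(f"{payload_bytes:>5} B payload: "
          f"serialization {t_payload * 1e9:7.1f} ns, "
          f"lock overhead {overhead:6.1%}")
```

For minimum-size packets the locking time dominates the time slot (roughly 70% overhead at 64 B under these assumptions), which is why sub-50 ns locking, together with long payloads such as those used in the PPG-limited experiment above, is essential for retaining high throughput in an OPS-based DC.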

The Hipoλaos Data- and Control-Plane Architecture for End-to-End OPS Links
Evaluation With Unicast Burst-Mode Packets
Evaluation of Multicast Packet Delivery
Scalability to 1024 Switching Ports
Conclusion