Abstract

The continuous growth of Internet traffic, driven largely by cloud computing, mobility, and the Internet of Things, is fueling the demand for ultra-high-speed optical interconnects in datacenters. Currently, the evolution from 100-Gbps Ethernet to 400 Gbps is under discussion within the IEEE P802.3bs 400-Gigabit Ethernet Task Force [1]. Among the different approaches, the four-lane 100-Gbps scheme is particularly attractive for 500-m and 2-km single-mode (SM) fiber applications, as it requires fewer lanes and thus offers higher spatial efficiency. For intra-datacenter communication, avoiding complex transceivers is crucial in terms of cost and power consumption; consequently, intensity-modulation and direct-detection (IMDD) links are preferred over coherent transmission technologies. Non-return-to-zero (NRZ) on–off keying (OOK) [2, 3] keeps the optical hardware simple but poses a major challenge to transceiver bandwidth at 100 Gbps and beyond. Advanced modulation formats, such as four-level pulse-amplitude modulation (PAM-4) [4, 5], discrete multitone (DMT) [6, 7], and electrical duobinary (EDB) [8–10], overcome this limitation by improving spectral efficiency while retaining the benefits of direct detection. However, most of the 100-Gbps-class DMT, PAM-4, and EDB demonstrations realized to date [3–8] are based on offline digital signal processing (DSP). Real-time 100-Gbps-class DMT transmission [11] is hindered by the large amount of power-hungry computation it requires. In [12], a real-time 112-Gbps PAM-4 optical link over 2-km standard single-mode fiber (SSMF) was demonstrated with a SiGe BiCMOS transceiver [including clock and data recovery (CDR)], albeit at a high power consumption of ~8.6 W.
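To make the spectral-efficiency argument concrete, the minimal Python sketch below maps the same bit stream onto NRZ OOK (one bit per symbol) and onto PAM-4 (two bits per symbol). The Gray-coded mapping and the normalized amplitude levels (-3, -1, +1, +3) are illustrative assumptions for this sketch, not parameters taken from any of the cited demonstrations.

```python
import numpy as np

# Gray-coded mapping of bit pairs to four amplitude levels.
# The levels (-3, -1, +1, +3) are a common normalized choice;
# they are an illustrative assumption, not values from the paper.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def bits_to_pam4(bits):
    """Map a bit stream to PAM-4 symbols: 2 bits per symbol."""
    if len(bits) % 2:
        raise ValueError("PAM-4 needs an even number of bits")
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([GRAY_PAM4[tuple(p)] for p in pairs], dtype=float)

def bits_to_nrz(bits):
    """Map the same bit stream to NRZ OOK: 1 bit per symbol."""
    return np.array([1.0 if b else 0.0 for b in bits])

bits = np.random.default_rng(0).integers(0, 2, 16)
pam4 = bits_to_pam4(bits)  # 8 symbols
nrz = bits_to_nrz(bits)    # 16 symbols
# Same payload, half the symbol rate: PAM-4 needs half the bandwidth.
print(len(nrz), "NRZ symbols vs", len(pam4), "PAM-4 symbols")
```

For the same 16-bit payload, PAM-4 emits half as many symbols as NRZ, so a transceiver of a given analog bandwidth can carry twice the bit rate; this is the trade the abstract refers to, obtained at the cost of reduced level spacing and thus tighter signal-to-noise requirements at the receiver.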
