Abstract
The high-performance server compute landscape is changing. The traditional model of building general-purpose enterprise compute boxes that end users configure with storage and networking to assemble their desired compute environments has evolved into purpose-built systems optimized for specific applications. This tight integration of hardware and software components, together with high-density midboard optical modules and an optical backplane, allows for unprecedented levels of switching and compute efficiency and has fueled the penetration of optical interconnects deep “inside the box,” particularly for switch scale-up. We briefly review earlier 40 G/port switching systems based on active optical cables and present our newest system: an all-optically-interconnected 100 G/port, 8.2 Tb/s InfiniBand packet switch ASIC whose 41 ports, each running at 100 Gb/s, are interconnected by 12-channel midboard optical transceivers providing 25 Gb/s per channel per direction of optical I/O. Combined with a blind-mate optical backplane, these components enable systems with up to 50 Tb/s of bandwidth in a 2U standard rack-mount configuration with industry-leading density, efficiency, and latency. For even tighter co-integration of optical interconnects with switch and processor ASICs, we discuss photonic multichip module and interposer packaging technologies that will further improve system energy efficiency and overcome impending system I/O bottlenecks.
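As a minimal sketch of the bandwidth arithmetic implied by the figures quoted above: all input numbers (41 ports, 100 Gb/s per port, 12 channels, 25 Gb/s per channel per direction) come directly from the abstract, while treating the 8.2 Tb/s aggregate as the bidirectional (Tx + Rx) total is an assumption about how that figure is counted.

```python
# Back-of-the-envelope bandwidth arithmetic for the figures in the abstract.
# Inputs are taken from the text; the bidirectional interpretation of the
# 8.2 Tb/s aggregate is an assumption.

PORTS = 41                 # switch ASIC ports
PORT_RATE_GBPS = 100       # per-port line rate, Gb/s
CHANNELS_PER_MODULE = 12   # midboard optical transceiver channels
CHANNEL_RATE_GBPS = 25     # per channel per direction, Gb/s

unidirectional_tbps = PORTS * PORT_RATE_GBPS / 1000        # 4.1 Tb/s
bidirectional_tbps = 2 * unidirectional_tbps               # 8.2 Tb/s
module_bw_gbps = CHANNELS_PER_MODULE * CHANNEL_RATE_GBPS   # 300 Gb/s per direction

print(f"Switch aggregate: {bidirectional_tbps:.1f} Tb/s "
      f"({unidirectional_tbps:.1f} Tb/s per direction)")
print(f"Per-module optical I/O: {module_bw_gbps} Gb/s per direction")
```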