Abstract

The memory over optical network (MONet) system is a disaggregated data center architecture in which serial (HMC) and parallel (DDR4) memory resources can be accessed over optically switched interconnects within and between racks. An FPGA/ASIC-based custom hardware IP (ReMAT) supports heterogeneous memory pools, accommodates optical-to-electrical conversion for remote access, performs the required serial/parallel conversion, and hosts the necessary local memory controller. An optically interconnected HMC-based (serial I/O) memory card is accessed by a memory controller embedded in the compute card, simplifying the hardware near the memory modules and substantially reducing latency, cost, power consumption, and space overheads. We characterize CPU–memory performance by experimentally demonstrating the impact of distance, number of switching hops, transceivers, channel bonding, and bit rate per transceiver on bit error rate, power consumption, added latency, sustained remote memory bandwidth/throughput (using the industry-standard STREAM benchmark), and cloud workload performance (operations per second, average added latency, and retired instructions per second for memcached running YCSB cloud workloads). MONet extends the CPU–memory operational limit from a few centimeters to tens of meters, yet applications can experience as little as a 10% performance penalty (at 36 m) compared with a direct-attached equivalent. Using the proposed parallel topology, a system can support up to 100,000 disaggregated cards.
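The sustained remote-memory bandwidth figures come from the standard STREAM benchmark. As a rough illustration of what STREAM's "triad" kernel measures over a CPU-to-memory link, the C sketch below times one triad pass and reports the implied bandwidth; the array size N and the single untimed warm-up are simplifications of this sketch, not the paper's setup, which uses the full multi-kernel, multi-repetition benchmark.

/* Minimal STREAM-style "triad" bandwidth sketch (illustrative only;
 * the paper uses the standard STREAM benchmark, not this code). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 26)   /* assumed array length; must exceed cache */
#define SCALAR 3.0

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    /* Initialize (also serves as a warm-up touching every page). */
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    double t0 = now_sec();
    for (size_t i = 0; i < N; i++)    /* triad: a = b + s*c */
        a[i] = b[i] + SCALAR * c[i];
    double t1 = now_sec();

    /* Three arrays of N doubles cross the CPU-memory link. */
    double gb = 3.0 * N * sizeof(double) / 1e9;
    printf("triad bandwidth: %.2f GB/s\n", gb / (t1 - t0));

    /* Check a result so the compiler cannot elide the loop. */
    return a[N / 2] == 7.0 ? 0 : 1;
}

Run locally and against a remote (optically attached) memory pool, the drop in the reported GB/s gives a first-order view of the distance and switching-hop penalties the abstract quantifies.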
