Abstract

RDMA has become one of the most prominent networking technologies in data center networks (DCNs) by providing high bandwidth and ultra-low latency, especially for data-intensive applications. An important challenge with RDMA is to exploit multiple paths for high throughput and reliability. Several schemes have been proposed to utilize multiple paths in RDMA networks, but they commonly require modification of RDMA NICs, which makes them hard to deploy in practice. In this paper, we propose a user-level multi-path RDMA (UL-MPRDMA) scheme, in which a flow is partitioned into sub-flows and transferred via multiple connections to make full use of the multiple paths in DCNs. UL-MPRDMA responds quickly to sudden network failures and congestion by performing dynamic sub-flow scheduling, and it also effectively avoids the performance degradation caused by the limited memory of RDMA NICs without CPU intervention. We implement UL-MPRDMA on a real testbed with commercial RDMA NICs and show that UL-MPRDMA achieves 30% higher link utilization than an existing RDMA transport technique.
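To make the sub-flow idea concrete, the sketch below is an illustration only, not the paper's implementation: it partitions a message into fixed-size chunks and assigns them round-robin to a configurable number of sub-flows, each of which would map to its own RDMA connection routed over a different path. All names here (NUM_SUBFLOWS, CHUNK_SIZE, schedule_chunks) are hypothetical.

```c
/* Illustrative sketch of sub-flow partitioning and round-robin scheduling.
 * In a real system each sub-flow would correspond to a separate RDMA
 * connection (queue pair); here we only compute the schedule. */
#include <stdio.h>
#include <stddef.h>

#define NUM_SUBFLOWS 4              /* number of parallel connections (assumed) */
#define CHUNK_SIZE   (64 * 1024)    /* bytes carried per scheduling unit (assumed) */

struct chunk {
    size_t offset;   /* offset of the chunk within the original message */
    size_t length;   /* chunk length in bytes */
    int    subflow;  /* sub-flow (connection) the chunk is scheduled on */
};

/* Split a message of `total` bytes into chunks and schedule them
 * round-robin across NUM_SUBFLOWS sub-flows. Returns the chunk count. */
static size_t schedule_chunks(size_t total, struct chunk *out, size_t max_chunks)
{
    size_t n = 0;
    for (size_t off = 0; off < total && n < max_chunks; off += CHUNK_SIZE, n++) {
        out[n].offset  = off;
        out[n].length  = (total - off < CHUNK_SIZE) ? (total - off) : CHUNK_SIZE;
        out[n].subflow = (int)(n % NUM_SUBFLOWS);
    }
    return n;
}

int main(void)
{
    struct chunk chunks[64];
    size_t n = schedule_chunks(1 * 1024 * 1024, chunks, 64);   /* 1 MiB flow */

    for (size_t i = 0; i < n; i++)
        printf("chunk %zu: offset=%zu len=%zu -> subflow %d\n",
               i, chunks[i].offset, chunks[i].length, chunks[i].subflow);
    return 0;
}
```

A dynamic scheduler as described in the abstract would replace the round-robin assignment with per-path feedback (e.g., completion latency or congestion signals), but the partitioning structure stays the same.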

Highlights

  • In data center networks (DCNs), Remote Direct Memory Access (RDMA) is one of the most promising networking technologies for data-intensive applications that require high bandwidth and ultra-low latency.

  • As many data-intensive applications such as big data analysis, machine learning, and scientific simulation run in data centers or High Performance Computing (HPC) environments [1], the importance of efficient network operation is increasing.

  • We evaluate UL-MPRDMA over several congestion control schemes to demonstrate that UL-MPRDMA can work with any congestion control scheme.


Summary

Introduction

In data center networks (DCNs), Remote Direct Memory Access (RDMA) is one of the most promising networking technologies for data-intensive applications that require high bandwidth and ultra-low latency. RDMA provides high-performance communication through a direct connection between local and remote memory. It performs zero-copy operations without operating system involvement by offloading the transport logic to the NIC hardware, which reduces data-copy overhead and CPU resource consumption. An application needs to pin virtual memory pages to use RDMA operations, which requires system calls and additional CPU overhead. In [15], a new page management system was proposed to utilize RDMA without pinning virtual memory pages.
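As a minimal sketch of the memory registration (pinning) step mentioned above, assuming a host with libibverbs and an RDMA-capable NIC, the following program registers a buffer with ibv_reg_mr so the NIC can access it directly; error handling is deliberately abbreviated and this is not tied to any particular scheme from the paper.

```c
/* Minimal memory registration sketch. Build: cc reg_mr_sketch.c -libverbs
 * Requires libibverbs and an RDMA-capable NIC. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA device found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) { fprintf(stderr, "failed to open device / allocate PD\n"); return 1; }

    /* Registering the buffer pins its pages and makes them accessible
     * to the NIC for RDMA read/write operations. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    /* Deregistering unpins the pages. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```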


