Abstract

Modern workloads often exceed the processing and I/O capabilities provided by resource virtualization, requiring direct access to the physical hardware in order to reduce latency and computing overhead. For computers interconnected in a cluster, access to remote hardware resources often requires facilitation both in hardware and in specialized drivers with virtualization support. This limits the availability of resources to specific devices and drivers that are supported by the virtualization technology being used, as well as by what the interconnection technology supports. For PCI Express (PCIe) clusters, we have previously proposed Device Lending as a solution for enabling direct low-latency access to remote devices. The method has extremely low computing overhead and does not require any application- or device-specific distribution mechanisms. Any PCIe device, such as network cards, disks, and GPUs, can easily be shared among the connected hosts. In this work, we have extended our solution with support for a virtual machine (VM) hypervisor. Physical remote devices can be “passed through” to VM guests, enabling direct access to physical resources while still retaining the flexibility of virtualization. Additionally, we have also implemented multi-device support, enabling shortest-path peer-to-peer transfers between remote devices residing in different hosts. Our experimental results show that multiple remote devices can be used, achieving bandwidth and latency close to native PCIe, without requiring any additional support in device drivers. I/O-intensive workloads run seamlessly using both local and remote resources. With our added VM and multi-device support, Device Lending offers highly customizable configurations of remote devices that can be dynamically reassigned and shared to optimize resource utilization, thus enabling a flexible composable I/O infrastructure for VMs as well as bare-metal machines.

Highlights

  • The demand for processing power and I/O resources in a cluster may, to a large degree, vary over time

  • For clusters of machines interconnected with PCI Express (PCIe), we propose a different strategy to efficient resource sharing called Device Lending [1, 2]

  • As virtual machine (VM) pass-through requires the use of an I/O Memory Management Unit (IOMMU) on the lending system, we focus on the impact I/O address virtualization has on performance with regard to longer data paths


Summary

Introduction

The demand for processing power and I/O resources in a cluster may, to a large degree, vary over time. Device Lending exploits the memory addressing capabilities inherent in PCIe in order to decouple devices from the hosts they physically reside in, without requiring any application- or device-specific distribution mechanisms. This decoupling allows a remote resource to be used by any machine in the cluster as if it were locally installed, without requiring any modifications to device drivers or application software. We have extended our Linux Kernel-based Virtual Machine (KVM) support from [2] with a mechanism for probing the memory used by the VM guest in order to dynamically detect the guest physical memory layout. This makes it possible to map device memory regions for other pass-through devices, without requiring any manual configuration of the VM instance.
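To illustrate the extra translation step that I/O address virtualization adds to the data path when a device is passed through via an IOMMU, the sketch below models a page-granular translation table mapping I/O virtual addresses (as seen by the device or VM guest) to physical addresses. This is a simplified illustration only; the class and names are hypothetical and do not come from the paper's implementation.

```python
PAGE_SHIFT = 12            # 4 KiB pages, as commonly used by IOMMUs
PAGE_SIZE = 1 << PAGE_SHIFT


class SimpleIommu:
    """Toy page-granular I/O virtual-to-physical address translator."""

    def __init__(self):
        # I/O virtual page number -> physical page number
        self.table = {}

    def map(self, iova, phys, size):
        """Map a contiguous I/O virtual range onto physical memory."""
        for off in range(0, size, PAGE_SIZE):
            self.table[(iova + off) >> PAGE_SHIFT] = (phys + off) >> PAGE_SHIFT

    def translate(self, iova):
        """Translate one I/O virtual address; an unmapped access faults."""
        ppn = self.table.get(iova >> PAGE_SHIFT)
        if ppn is None:
            raise KeyError("IOMMU fault: unmapped address 0x%x" % iova)
        return (ppn << PAGE_SHIFT) | (iova & (PAGE_SIZE - 1))


# Map two guest pages at I/O virtual 0x10000 onto physical 0xfee00000.
iommu = SimpleIommu()
iommu.map(iova=0x10000, phys=0xFEE00000, size=2 * PAGE_SIZE)
print(hex(iommu.translate(0x10004)))  # -> 0xfee00004
```

Every DMA access from a passed-through device performs a lookup of this kind in hardware; on longer data paths (e.g. remote peer-to-peer transfers) each hop that crosses an IOMMU pays this translation cost, which is the overhead the paper's evaluation quantifies.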

PCIe overview
Memory addressing and forwarding
Virtualization support and pass-through
Non-transparent bridging
Related work
Virtualization approaches
Partitioning the fabric
Device lending
Supporting virtual machine borrowers
Supporting multiple devices and peer-to-peer
Performance evaluation
IOMMU performance penalty
Native peer-to-peer evaluation
Bare-metal bandwidth evaluation
Bare-metal latency evaluation
VM peer-to-peer evaluation
VM bandwidth evaluation
VM latency evaluation
Pass-through NVMe experiments
Image classification workload
Discussion
VM migration
Security considerations
Findings
Interrupt forwarding
Conclusion
