Abstract

Data-center workloads are complex and their resource requirements vary over time, yet in practice resource schedulers rarely exploit the attributes of network workloads. A scheduler that cannot adapt network resources to workload changes cannot achieve optimal throughput and performance when allocating them. There is therefore a pressing need for a workload-aware scheduling framework, built on network I/O virtualization, that allocates network resources on demand. However, none of the current mainstream I/O virtualization methods provides workload awareness while also meeting the performance requirements of virtual machines (VMs). We therefore propose a method that dynamically senses each VM's workload to allocate network resources on demand, preserving VM scalability while improving overall system performance. The method combines the advantages of I/O para-virtualization and SR-IOV: a limited number of virtual functions (VFs) guarantees the performance of network-intensive VMs, improving the overall network performance of the system, while non-network-intensive VMs use para-virtualized Network Interface Cards (NICs), which are not limited in number, to preserve scalability. Furthermore, to allocate bandwidth that matches each VM's network workload, we divide VF network bandwidth into hierarchical tiers and switch dynamically between VFs and para-virtualized NICs using the active-backup mode of the bonding driver together with ACPI hotplug, so that VFs can be reallocated on the fly. Experiments show that the allocation framework effectively improves system network performance: average request latency is reduced by more than 26%, and system bandwidth throughput is improved by about 5%.
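
To make the switching path concrete, the following is a minimal controller sketch, assuming a KVM/libvirt host. The throughput thresholds, the pre-written libvirt hostdev XML describing the VF, and the assumption that the guest keeps its virtio NIC and the hot-plugged VF enslaved in an active-backup bond are illustrative choices, not the paper's implementation; only standard virsh sub-commands (attach-device, detach-device) and the iproute2 per-VF rate limit are used.

    import subprocess

    # Illustrative tier boundaries; the actual hierarchical division of VF
    # bandwidth is a design parameter of the framework, not reproduced here.
    PROMOTE_MBPS = 500    # above this, treat the VM as network-intensive
    DEMOTE_MBPS = 250     # below this, reclaim the VF (hysteresis gap)

    def set_vf_rate(pf: str, vf_index: int, max_tx_rate_mbps: int) -> None:
        """Cap a VF's transmit rate on the host (newer iproute2 syntax),
        one way to realise a hierarchical division of VF bandwidth."""
        subprocess.run(["ip", "link", "set", pf, "vf", str(vf_index),
                        "max_tx_rate", str(max_tx_rate_mbps)], check=True)

    def hotplug_vf(domain: str, vf_xml_path: str, attach: bool) -> None:
        """Hot-plug or unplug an SR-IOV VF for a running VM. The guest's
        bonding driver in active-backup mode keeps the virtio NIC as the
        standby slave, so traffic fails over without breaking connections."""
        action = "attach-device" if attach else "detach-device"
        subprocess.run(["virsh", action, domain, vf_xml_path, "--live"],
                       check=True)

    def reconcile(domain: str, observed_mbps: float, has_vf: bool,
                  vf_xml_path: str) -> bool:
        """One step of the workload-aware loop: given this VM's observed
        throughput (obtained elsewhere, e.g. from host-side counters),
        decide whether it should hold a VF and switch accordingly."""
        if observed_mbps >= PROMOTE_MBPS and not has_vf:
            hotplug_vf(domain, vf_xml_path, attach=True)
            return True
        if observed_mbps <= DEMOTE_MBPS and has_vf:
            hotplug_vf(domain, vf_xml_path, attach=False)
            return False
        return has_vf

The gap between the two thresholds avoids oscillating between the VF and the para-virtualized NIC when a VM's traffic hovers around a single cut-off.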

Highlights

  • With the rapid development of cloud computing technology, more and more network service centers are developing and migrating to cloud platforms

  • For non-network-intensive virtual machines (VMs), system scalability is preserved by using para-virtualized Network Interface Cards (NICs), which are not limited in number

  • This paper proposes a virtual network resource allocation method that can combine the advantages of I/O para-virtualization and Single-Root I/O Virtualization (SR-IOV) technology

Introduction

With the rapid development of cloud computing technology, more and more network service centers are being developed on, or migrated to, cloud platforms. When multiple VMs share the same hardware device, the VMM must present each VM with a virtual network card. In full virtualization, the VMM emulates the behavior of the virtual device entirely in software. KVM's traditional approach to I/O virtualization uses QEMU to emulate I/O devices: QEMU invokes its hardware emulation code to perform the I/O operation and writes the result to an I/O sharing page, which the VM reads through the KVM module. Because QEMU's pure software emulation can model a wide variety of hardware devices and presents VMs with an environment fully consistent with the hardware platform, its compatibility is good.
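
The flavours of virtual NIC discussed here can be told apart from inside a Linux guest by the kernel driver bound to each interface. The short sketch below, assuming a guest with ethtool installed and using illustrative interface names, is only meant to make that distinction tangible.

    import subprocess

    def nic_driver(iface: str) -> str:
        """Report the kernel driver bound to a guest NIC via `ethtool -i`.
        Typical values: an emulated QEMU device binds e1000/rtl8139, a
        para-virtualized NIC binds virtio_net, and a passed-through SR-IOV
        VF binds a VF driver such as ixgbevf or iavf."""
        out = subprocess.check_output(["ethtool", "-i", iface], text=True)
        for line in out.splitlines():
            if line.startswith("driver:"):
                return line.split(":", 1)[1].strip()
        return "unknown"

    # Example (interface name is illustrative):
    # print(nic_driver("eth0"))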
