Abstract

NVMe SSDs (PCIe SSDs accessed through the NVMe protocol) are increasingly deployed and virtualized by cloud providers to improve I/O performance in the virtual machines rented by tenants. Although NVMe SSDs offer much higher IOPS and lower read/write latency, existing software cannot efficiently exploit these devices, and the problem is even worse on virtualized platforms. Applications in guest VMs reach the NVMe SSD through a long I/O stack whose overhead falls into three parts: (1) I/O execution on the emulated NVMe device in the guest operating system (OS); (2) context switches (e.g., VM_Exit) and data movement between the guest OS and the host OS; and (3) I/O execution in the host OS on the physical NVMe SSD. To address this long I/O stack, we propose SPDK-vhost-NVMe, an I/O service target built on user-space NVMe drivers that collaborates with the hypervisor to accelerate NVMe I/O inside VMs. Our approach eliminates unnecessary VM_Exit overhead and shrinks the I/O execution stack in the host OS, improving storage I/O performance in the guest OS. Compared with QEMU's native NVMe emulation, SPDK-vhost-NVMe achieves up to a 6X improvement in IOPS and a 70% reduction in latency for read workloads generated by FIO, and a 5X performance improvement on some RocksDB db_bench test cases (e.g., random read). Even compared with the optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe remains competitive in per-core performance.
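
As a concrete illustration of the user-space driver layer such a vhost target builds on, the sketch below shows how a host process can issue a read through SPDK's polled-mode NVMe driver, bypassing the host kernel block layer and avoiding interrupts. This is a minimal sketch modeled on SPDK's public NVMe API (as in its hello_world example), not code from the paper; device names, namespace choice, and error handling are simplified assumptions.

```c
/* Minimal sketch: submit one 4 KiB read via SPDK's user-space NVMe driver.
 * Not from the paper; simplified for illustration (no error paths). */
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;
static struct spdk_nvme_ns *g_ns;
static bool g_done;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to the first NVMe controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	g_ctrlr = ctrlr;
	g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);	/* assume namespace 1 */
}

static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
}

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_qpair *qpair;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "vhost_nvme_sketch";	/* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Enumerate local PCIe NVMe controllers and take them over in user space. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || g_ns == NULL) {
		return 1;
	}

	/* Dedicated I/O queue pair: submission and completion are handled
	 * entirely in user space, with no system calls and no interrupts. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
	buf = spdk_dma_zmalloc(4096, 4096, NULL);

	/* Read one block at LBA 0; read_done() fires on completion. */
	spdk_nvme_ns_cmd_read(g_ns, qpair, buf, 0, 1, read_done, NULL, 0);

	/* Poll for completions instead of sleeping on an interrupt. */
	while (!g_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}

	spdk_dma_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(g_ctrlr);
	return 0;
}
```

In the proposed vhost target, this polled, user-space submission path serves I/O requests forwarded from the guest over shared memory, which is what removes both the VM_Exit on the submission path and the host kernel I/O stack.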
