Abstract

I/O access conflicts severely reduce the utilization of parallel units (PUs) within NVMe SSDs, which introduces unpredictable performance loss. Although existing works adopt I/O isolation or conflict-aware I/O scheduling to avoid access conflicts, they can result in unbalanced utilization and reduce the lifetime of NVMe SSDs. In this paper, we design and implement CFIO, a low-overhead conflict-aware I/O mechanism that achieves conflict-free I/Os to exploit the internal parallelism of NVMe SSDs. CFIO improves PU utilization and reduces I/O latency with two novel mechanisms. First, a conflict-free (CF) lane is proposed to eliminate conflicts by dividing I/O requests into conflict-free PU queues based on their physical addresses; the PU queues correspond to the PU resources within the NVMe SSD. Second, a k-RR scheduler is designed to dispatch read and write requests to NVMe SSDs in separate batches. The k-RR scheduler fully exploits the internal parallelism of NVMe SSDs and forms an I/O pipeline based on the dual registers of each PU. Finally, we integrate CFIO into LightNVM with an Open-Channel NVMe SSD (OCSSD) and compare it with several existing solutions. Our evaluations show that CFIO improves the throughput of the OCSSD by 19.32% and reduces its tail latency by 23.71%, compared to state-of-the-art methods.
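The two mechanisms described above can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's implementation: the PU count, batch size k, the striping-based address-to-PU mapping, and all class and function names are assumptions for illustration only. It shows the core idea that per-PU queues keyed by physical address cannot conflict, and that a k-round-robin pass drains up to k requests per PU with reads and writes batched separately.

```python
from collections import deque

NUM_PUS = 8  # hypothetical number of parallel units (channels x LUNs)
K = 4        # hypothetical batch size per PU for the k-RR pass


def pu_of(phys_addr, num_pus=NUM_PUS):
    # Map a physical address to its PU. A real FTL would derive this
    # from channel/LUN address bits; simple striping is assumed here.
    return phys_addr % num_pus


class CFLane:
    """Illustrative conflict-free lane: requests targeting different
    PUs never share a queue, so queued I/Os cannot conflict on a PU."""

    def __init__(self, num_pus=NUM_PUS):
        self.read_q = [deque() for _ in range(num_pus)]
        self.write_q = [deque() for _ in range(num_pus)]

    def submit(self, op, phys_addr):
        # Route the request into the per-PU queue for its operation type.
        qs = self.read_q if op == "read" else self.write_q
        qs[pu_of(phys_addr, len(qs))].append((op, phys_addr))

    def krr_dispatch(self, k=K):
        # Visit every PU round-robin, draining at most k requests per
        # PU per pass; reads and writes form separate batches.
        batches = []
        for qs in (self.read_q, self.write_q):
            for pu, q in enumerate(qs):
                batch = [q.popleft() for _ in range(min(k, len(q)))]
                if batch:
                    batches.append((pu, batch))
        return batches
```

Each returned batch targets exactly one PU and contains only reads or only writes, which is what allows batches for different PUs to proceed in parallel without access conflicts.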
