Abstract
Non-Volatile Memory (NVM) has emerged as a promising candidate for next-generation main memory. Although many tree indexes have been proposed for NVM, they generally adopt B+-tree-like structures. To further improve the performance of NVM-aware indexes, we consider integrating learned indexes into NVM. The challenges of such an integration are twofold: (1) Existing NVM indexes rely on small nodes to accelerate insertions with crash consistency, whereas learned indexes use large nodes to obtain a flat structure. (2) The node structure of learned indexes is not NVM-friendly: accessing a single learned node causes multiple NVM block misses. Thus, in this paper, we propose a new persistent learned index called PLIN. The novelty of PLIN lies in four aspects: an NVM-aware data placement strategy, leaf nodes that are locally unordered but globally ordered, a model copy mechanism, and a hierarchical insertion strategy. In addition, PLIN is designed for an NVM-only architecture and supports instant recovery. We also present optimistic concurrency control and fine-grained locking mechanisms to make PLIN scalable under concurrent requests. We conduct experiments on real persistent memory with various workloads and compare PLIN with APEX, PACtree, ROART, TLBtree, and Fast&Fair. The results show that PLIN achieves 2.08x higher insertion performance and 4.42x higher query performance than its competitors on average. Meanwhile, PLIN needs only ~30 μs to recover from a system crash.
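To make the "locally unordered, globally ordered" leaf-node idea concrete, below is a minimal, illustrative C++ sketch: slots inside a leaf are unordered, while each leaf covers a disjoint, ordered key range; a linear model predicts a slot and insertion probes nearby slots. All names here (LearnedLeaf, predict_slot, persist_barrier, the 256-slot size, the zero-key sentinel) are hypothetical simplifications for illustration, not taken from the PLIN implementation.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Hypothetical sketch of a learned leaf node on NVM: keys inside the node
// are stored unordered (so an insert is a cheap single-slot write), while
// nodes themselves cover disjoint, globally ordered key ranges.
struct LearnedLeaf {
    static constexpr std::size_t kSlots = 256;  // assumed node size
    static constexpr std::uint64_t kEmpty = 0;  // sentinel: slot unused
                                                // (key 0 unsupported here)

    double slope = 0.0, intercept = 0.0;        // linear model: key -> slot
    std::array<std::uint64_t, kSlots> keys{};   // unordered within the node
    std::array<std::uint64_t, kSlots> values{};

    std::size_t predict_slot(std::uint64_t key) const {
        double pos = slope * static_cast<double>(key) + intercept;
        if (pos < 0) pos = 0;
        if (pos > static_cast<double>(kSlots - 1)) pos = kSlots - 1;
        return static_cast<std::size_t>(pos);
    }

    // Probe around the predicted slot; because slots are unordered, no
    // shifting is needed, and writing the 8-byte key publishes the entry.
    bool insert(std::uint64_t key, std::uint64_t value) {
        std::size_t start = predict_slot(key);
        for (std::size_t d = 0; d < kSlots; ++d) {
            std::size_t i = (start + d) % kSlots;
            if (keys[i] == kEmpty) {
                values[i] = value;
                // persist_barrier();  // clwb + sfence on real NVM
                keys[i] = key;         // 8-byte atomic store commits insert
                // persist_barrier();
                return true;
            }
        }
        return false;  // node full: caller splits / retrains upper levels
    }

    std::optional<std::uint64_t> lookup(std::uint64_t key) const {
        std::size_t start = predict_slot(key);
        for (std::size_t d = 0; d < kSlots; ++d) {
            std::size_t i = (start + d) % kSlots;
            if (keys[i] == key) return values[i];
        }
        return std::nullopt;
    }
};
```

Under these assumptions, publishing the 8-byte key is the commit point, so an insert needs no logging and a restart needs no replay, which is in the spirit of the crash-consistent insertions and instant recovery the abstract describes; range queries remain possible because ordering is preserved across, rather than within, leaf nodes.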