Abstract

Recently, with the widespread use of flash-memory-based solid state drives (SSDs), many studies have been conducted on SSD-based data management, covering index structures, query processing, and buffer management schemes. This paper focuses on buffer schemes for SSD-based database systems. Unlike previous studies, however, we concentrate on buffer schemes for tree indexes on SSDs. This work is motivated by the observation that access patterns on index pages differ markedly from those on data pages. In a typical tree index, e.g., a B+-tree, the root and internal nodes have higher read frequencies than leaf nodes do. Traditional SSD-oriented buffering methods do not consider this special feature of indexes and are thus inefficient when used as index buffer management schemes. In this paper, we present a new buffering scheme for tree indexes on SSDs named Clean-First and Dirty-Redundant-Write (CFDRW). The contributions of CFDRW are manifold. First, it assigns priorities to index pages to reflect the differing access patterns of the nodes in a tree index. Second, it uses priority and recency to detect the hotness of index pages and introduces a new replacement algorithm based on the priority and recency of buffered index pages. Third, it exploits the internal parallelism of SSDs by writing out buffer pages at a coarse granularity, i.e., flushing several pages with one physical I/O operation. We compare our proposal with many previous methods, including LRU, LIRS, and six flash-memory-based buffering schemes, on two commodity SSDs using both synthetic and real workloads. The results show that our proposal outperforms the competitors on all workloads and SSDs in terms of hit ratio, read count, write count, and elapsed time.
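
To make the general idea concrete, the following is a minimal Python sketch of a clean-first buffer policy that evicts by level-based priority and recency and flushes dirty pages in batches. It is an illustration of the kind of policy the abstract describes, not the paper's actual CFDRW algorithm; the class, the scoring rule, and the batch size are all hypothetical.

```python
# Illustrative sketch only: a clean-first, priority+recency buffer policy
# loosely inspired by the ideas described in the abstract. The names, the
# scoring rule, and the flush batch size are assumptions, not taken from CFDRW.
from dataclasses import dataclass


@dataclass
class Page:
    page_id: int
    level: int           # 0 = root, larger values = deeper nodes (leaves)
    dirty: bool = False
    last_access: int = 0


class CleanFirstBuffer:
    def __init__(self, capacity: int, flush_batch: int = 4):
        self.capacity = capacity
        self.flush_batch = flush_batch    # pages written per physical I/O (assumed)
        self.pages: dict[int, Page] = {}
        self.clock = 0

    def access(self, page_id: int, level: int, write: bool = False) -> bool:
        """Return True on a buffer hit, False on a miss (page must be fetched)."""
        self.clock += 1
        page = self.pages.get(page_id)
        hit = page is not None
        if page is None:
            if len(self.pages) >= self.capacity:
                self._evict()
            page = Page(page_id, level)
            self.pages[page_id] = page
        page.last_access = self.clock
        page.dirty = page.dirty or write
        return hit

    def _score(self, p: Page) -> tuple:
        # Lower score = better eviction victim: deep (low-priority), cold pages first.
        priority = -p.level               # root/internal nodes get higher priority
        return (priority, p.last_access)

    def _evict(self) -> None:
        clean = [p for p in self.pages.values() if not p.dirty]
        if clean:                         # clean-first: eviction needs no write-back
            victim = min(clean, key=self._score)
            del self.pages[victim.page_id]
            return
        # No clean page left: flush a batch of cold dirty pages in one simulated
        # I/O (coarse-granularity write-out), then drop the coldest of them.
        dirty = sorted(self.pages.values(), key=self._score)[: self.flush_batch]
        for p in dirty:
            p.dirty = False               # stand-in for a single batched write
        del self.pages[dirty[0].page_id]
```

For example, with `CleanFirstBuffer(capacity=3)`, repeatedly calling `access(1, level=0)` alongside misses on leaf pages would keep the root page resident, since leaf pages score as better eviction victims under this assumed priority rule.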
