Abstract

In-memory databases (IMDBs) store all working data in main memory, making memory accesses the dominant factor in overall system performance. Micro-architectural studies of mainstream in-memory on-line transaction processing (OLTP) systems show that more than half of the execution time is spent on memory stalls. Moreover, for IMDBs that adopt aggressive transaction compilation optimizations, data misses from the last-level cache (LLC) account for the majority of the overall stall time. In this paper, through profiling analysis of IMDBs, we observe that index access misses dominate LLC data misses. Based on the key observation that adjacent keys tend to follow similar traversal paths in ordered index searches, we propose path prefetching to mitigate LLC misses induced by ordered index searches: it records mappings between keys and their traversal paths and then generates prefetches for future searches of the same or adjacent keys. Experimental results show that for ordered index searches the proposed path prefetcher provides an average speedup of 27.4% over a baseline with no prefetching.
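To make the idea concrete, the following is a minimal software sketch of the path-prefetching concept for a B+-tree-style ordered index; the paper's prefetcher itself is a micro-architectural mechanism, and all names here (BPlusNode, PathTable, search) are hypothetical illustrations rather than the authors' implementation. A lookup first prefetches the path recorded for the nearest previously seen key, then records its own traversal path for future lookups.

```cpp
// Conceptual sketch only: a software analogue of path prefetching.
// Assumes a GCC/Clang toolchain for __builtin_prefetch.
#include <cstdint>
#include <map>
#include <vector>

struct BPlusNode {
    bool is_leaf = false;
    std::vector<uint64_t> keys;
    std::vector<BPlusNode*> children;   // populated only for inner nodes
};

// Records the traversal path (node pointers) taken for each search key.
class PathTable {
public:
    void record(uint64_t key, std::vector<const BPlusNode*> path) {
        paths_[key] = std::move(path);
    }

    // Prefetch the path recorded for the nearest previously seen key,
    // exploiting the observation that adjacent keys share most of their path.
    void prefetch_for(uint64_t key) const {
        if (paths_.empty()) return;
        auto it = paths_.lower_bound(key);
        if (it == paths_.end()) --it;           // fall back to the largest recorded key
        for (const BPlusNode* node : it->second)
            __builtin_prefetch(node, /*rw=*/0, /*locality=*/3);
    }

private:
    std::map<uint64_t, std::vector<const BPlusNode*>> paths_;
};

// Ordered index search instrumented to consult and update the path table.
// Returns the leaf node that would contain `key`.
const BPlusNode* search(const BPlusNode* root, uint64_t key, PathTable& table) {
    table.prefetch_for(key);                    // issue prefetches before traversal
    std::vector<const BPlusNode*> path;
    const BPlusNode* node = root;
    while (node && !node->is_leaf) {
        path.push_back(node);
        size_t i = 0;
        while (i < node->keys.size() && key >= node->keys[i]) ++i;
        node = node->children[i];
    }
    if (node) path.push_back(node);
    table.record(key, std::move(path));
    return node;
}
```

In this sketch the benefit comes from issuing prefetches for all levels of the likely path at once, instead of discovering each level's node only after the previous level's cache miss resolves, which mirrors the serialized pointer-chasing pattern the paper targets.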
