Abstract
Databases live in multi-level memory hierarchies that include disks, main memories, and several levels of processor caches. Four factors have shifted the performance bottleneck of data-intensive commercial workloads from I/O to the processor and memory subsystem. First, storage systems are becoming faster and more intelligent. Second, modern database storage managers aggressively improve locality through clustering, hide I/O latencies using prefetching, and parallelize disk accesses using data striping. Third, main memories have become much larger and often hold the application's working set. Finally, the widening memory/processor speed gap has heightened the importance of processor caches to database performance. This chapter discusses the computer architecture and database literature on understanding and evaluating database application performance on modern hardware. It presents approaches and methodologies used to produce execution-time breakdowns when running database workloads on modern processors. It also discusses the techniques proposed in the literature to alleviate these bottlenecks, along with their evaluation. Finally, the chapter emphasizes the importance of, and explains the challenges in, determining optimal data placement across all levels of the memory hierarchy, and contrasts this approach with alternatives such as prefetching data and instructions.