Abstract

Caching and prefetching techniques have been used for decades in database engines and file systems to improve the performance of I/O-intensive applications. A prefetching algorithm typically exploits the system's latencies by loading into main memory elements that will be needed in the future, speeding up data access. While these solutions can significantly reduce execution time, prefetching rules are often defined at the data level, making them hard to understand, maintain, and optimize. In addition, low-level prefetching and caching components are difficult to align with scalable model persistence frameworks because they are unaware of potential optimizations relying on the analysis of metamodel-level information, and they are less common in NoSQL databases, a popular solution for storing large models. To overcome this situation we propose PrefetchML, a framework that executes prefetching and caching strategies over models. Our solution embeds a DSL to precisely configure the prefetching rules to follow, and a monitoring component that provides insights into how the prefetching execution is performing, helping designers optimize their performance plans. Our experiments show that PrefetchML is a suitable solution to improve query execution time on top of scalable model persistence frameworks. Tool support is fully available online as an open-source Eclipse plugin.
