Abstract

In in-memory databases (IMDBs), where all data reside in main memory for fast processing, logging and checkpointing are essential for achieving data persistence. Logging in IMDBs has evolved to reduce run-time overhead, but this comes at the cost of longer recovery time. Checkpointing compensates for this drawback of logging, but existing schemes often incur high costs in the form of reduced system throughput, increased latency, and increased memory usage. In this paper, we propose a checkpointing scheme based on validity tracking-based compaction (VTC), a technique that tracks the validity of log records in a file and removes those that are no longer needed. The proposed scheme uses far less memory than existing checkpointing schemes, which rely on consistent snapshots. Our experiments show that checkpointing with a consistent snapshot increases the memory footprint by up to two times under update-intensive workloads, whereas VTC requires only 2% additional memory for checkpointing. The system can therefore devote most of its memory to storing data and processing transactions.
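The abstract describes VTC only at a high level. The following is a minimal, hypothetical sketch of the general idea for a key-value update log: track which log records are still valid (i.e., not superseded by a later update) and rewrite the log keeping only those, rather than copying a full consistent snapshot of the data set. Names such as CompactingLog are illustrative and are not taken from the paper.

    # Hypothetical sketch of validity tracking-based compaction (VTC) over a
    # key-value update log; names and structure are illustrative, not the
    # authors' implementation.

    class CompactingLog:
        def __init__(self):
            self.records = []    # append-only (key, value) log records
            self.latest = {}     # key -> index of the record that is still valid
            self.valid_count = 0

        def append(self, key, value):
            # Appending a new update invalidates the previous record for this key.
            idx = len(self.records)
            self.records.append((key, value))
            if key not in self.latest:
                self.valid_count += 1
            self.latest[key] = idx

        def invalid_ratio(self):
            total = len(self.records)
            return 0.0 if total == 0 else 1.0 - self.valid_count / total

        def compact(self):
            # Rewrite the log, keeping only records that are still valid.
            new_records, new_latest = [], {}
            for key, idx in self.latest.items():
                new_latest[key] = len(new_records)
                new_records.append(self.records[idx])
            self.records, self.latest = new_records, new_latest


    # Example: compact once more than half of the log is stale.
    log = CompactingLog()
    for i in range(4):
        log.append("x", i)       # the first three records for "x" become invalid
    log.append("y", 0)
    if log.invalid_ratio() > 0.5:
        log.compact()            # only the latest "x" and "y" records remain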

Highlights

  • In-memory databases (IMDBs) are designed to achieve fast response time by processing data using the main memory, without accessing the disk

  • IMDBs are widely adopted for various applications [1], such as e-commerce online transaction processing (OLTP) services, online games [2], finance [3], and more

  • We focus on a persistence scheme that combines logging and checkpointing, along with a checkpointing algorithm that minimizes the increase in memory usage and provides stable throughput


Introduction

In-memory databases (IMDBs) are designed to achieve fast response time by processing data in main memory, without accessing the disk. For this reason, IMDBs are widely adopted for various applications [1], such as e-commerce online transaction processing (OLTP) services, online games [2], finance [3], and more. IMDBs prefer data replication for fast failover and generally maintain replicas across multiple nodes to achieve high availability [12], [16], [17], [18]. However, catastrophic failures such as cluster-wide power outages can cause data loss if the data are not in stable storage. The traditional techniques used for database durability are logging and checkpointing.
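As background, the combination of logging and checkpointing can be illustrated with a minimal, hypothetical sketch of a toy in-memory key-value store (file names, method names, and structure are assumptions for illustration, not the paper's design or any particular IMDB's API): each update is written to a redo log before being applied, a checkpoint persists the current state and truncates the log, and recovery loads the checkpoint and replays the remaining log records.

    import json, os

    # Illustrative sketch: redo logging plus periodic checkpointing for a toy
    # in-memory key-value store. On recovery, the checkpoint is loaded first,
    # then log records written after it are replayed.

    class DurableStore:
        def __init__(self, log_path="redo.log", ckpt_path="checkpoint.json"):
            self.log_path, self.ckpt_path = log_path, ckpt_path
            self.data = {}
            self.recover()

        def put(self, key, value):
            with open(self.log_path, "a") as log:    # write-ahead: log before applying
                log.write(json.dumps({"k": key, "v": value}) + "\n")
                log.flush()
                os.fsync(log.fileno())
            self.data[key] = value

        def checkpoint(self):
            with open(self.ckpt_path, "w") as ckpt:  # persist the current state
                json.dump(self.data, ckpt)
            open(self.log_path, "w").close()         # truncate the redo log

        def recover(self):
            if os.path.exists(self.ckpt_path):
                with open(self.ckpt_path) as ckpt:
                    self.data = json.load(ckpt)
            if os.path.exists(self.log_path):
                with open(self.log_path) as log:
                    for line in log:                 # replay post-checkpoint records
                        rec = json.loads(line)
                        self.data[rec["k"]] = rec["v"]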

