Abstract

Synchronously logging updates to persistent storage first and then asynchronously committing these updates to their rightful storage locations is a well-known and heavily used technique for improving the sustained throughput of write-intensive disk-based data processing systems; consequently, the latency and throughput of such systems are largely determined by those of the underlying logging mechanism. The conventional wisdom is that logging operations are relatively straightforward to optimize because the associated disk access pattern is largely sequential. However, achieving both high throughput and low latency for fine-grained logging operations, whose payload size is smaller than a disk sector, turns out to be extremely challenging. This paper describes the experiences and lessons we gained from building a disk logging system that successfully delivers over 1.2 million 256-byte logging operations per second while keeping the average logging latency below 1 msec.
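As a rough illustration of the write-ahead pattern the abstract refers to (a synchronous log append followed by an asynchronous commit to the update's home location), the following C sketch appends a fixed-size record to a log file and forces it to stable storage before acknowledging the update, then applies it to the data file afterwards. The file names, record size, and single-record apply step are illustrative assumptions, not details taken from the paper.

```c
/*
 * Minimal write-ahead logging sketch (illustrative; not the paper's system).
 * An update is first appended synchronously to the log, then applied
 * asynchronously to its home location in the data file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define RECORD_SIZE 256              /* fine-grained payload, smaller than a disk sector */

/* Synchronous phase: append the record and force it to stable storage
 * before acknowledging the update to the caller. */
static int log_append(int log_fd, const char *payload)
{
    if (write(log_fd, payload, RECORD_SIZE) != RECORD_SIZE)
        return -1;
    return fsync(log_fd);            /* durability point for the update */
}

/* Asynchronous phase: later, copy the logged update to its home offset
 * in the data file; batching and checkpointing are omitted here. */
static int apply_to_home(int data_fd, off_t home_offset, const char *payload)
{
    return pwrite(data_fd, payload, RECORD_SIZE, home_offset) == RECORD_SIZE ? 0 : -1;
}

int main(void)
{
    int log_fd  = open("updates.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    int data_fd = open("table.dat",   O_WRONLY | O_CREAT, 0644);
    if (log_fd < 0 || data_fd < 0)
        return 1;

    char payload[RECORD_SIZE];
    memset(payload, 'x', sizeof(payload));

    if (log_append(log_fd, payload) == 0)        /* caller sees the update as committed here */
        apply_to_home(data_fd, 0, payload);      /* deferred work; could run in a background thread */

    close(log_fd);
    close(data_fd);
    return 0;
}
```

In this sketch, the sequential log append plus fsync is the operation whose latency and throughput the paper targets; the challenge it highlights is that each 256-byte record is smaller than a disk sector, so naive per-record syncs waste most of each sector write.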
