Abstract

The number of applications that simultaneously access a storage device has grown as storage capacity has increased and hyperscale environments have emerged. In multi-application environments, append-only data requests issued to storage by applications such as log-structured merge-tree-based key-value (LSMKV) stores can degrade the storage-internal buffer hit ratio of other applications. This is because intensive requests for append-only data, which are rarely re-accessed, can evict frequently re-accessed data from the buffer. This degradation in the buffer hit ratio increases the storage access latency of applications. Herein, we propose a buffer management method that increases the buffer hit ratio of non-append-only data (or applications) in multi-application environments. The proposed method (1) defines large-sequential writes (that are not overwritten) and all reads on them as append-only input/output (I/O), (2) detects I/O that matches the access pattern of an LSMKV's append-only data, (3) allocates append-only read/write requests to separate small buffer spaces, and (4) evicts the append-only data from the buffer when free buffer space is required. Because the proposed method confines an LSMKV's append-only data to buffer spaces of limited size, it can increase the buffer hit ratio of applications that frequently re-access their data. Experimental results show that the proposed method increases the buffer hit ratio of hot-data-intensive applications and the total buffer hit ratio by 70% and 46.8% on average, respectively, compared with existing buffer management techniques.
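
The buffer organization described above can be illustrated with a minimal sketch. The Python code below is not the paper's implementation; the class name, the LRU replacement inside each space, and the space sizes are illustrative assumptions. It shows the core idea: append-only I/O is kept in small dedicated spaces (BAW/BAR) and those spaces are drained first when free buffer space is needed, so frequently re-accessed data in the main space is protected.

```python
from collections import OrderedDict

class SplitBuffer:
    """Illustrative storage-internal buffer with dedicated spaces for
    append-only data (BAW for writes, BAR for reads). Sizes and the
    per-space LRU policy are assumptions, not the paper's exact design."""

    def __init__(self, main_blocks, baw_blocks, bar_blocks):
        self.spaces = {
            "main": OrderedDict(),  # normal (potentially hot) data
            "baw": OrderedDict(),   # append-only write data
            "bar": OrderedDict(),   # append-only read data
        }
        self.limits = {"main": main_blocks, "baw": baw_blocks, "bar": bar_blocks}

    def access(self, lba, data, append_only, is_write):
        # Append-only requests go to BAW/BAR, so they cannot evict
        # frequently re-accessed data held in the main space.
        name = ("baw" if is_write else "bar") if append_only else "main"
        space = self.spaces[name]
        if lba in space:
            space.move_to_end(lba)          # buffer hit
        elif len(space) >= self.limits[name]:
            space.popitem(last=False)       # evict the LRU block of this space only
        space[lba] = data

    def reclaim(self, blocks_needed):
        # When free buffer space is required, append-only data is
        # evicted before anything held in the main space.
        for name in ("baw", "bar", "main"):
            space = self.spaces[name]
            while blocks_needed > 0 and space:
                space.popitem(last=False)
                blocks_needed -= 1
```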

Highlights

  • With the emergence of hyperscale environments such as the Internet of Things (IoT) and cloud computing, the number of users that can access a storage device has increased [1]–[3]

  • Append-only data I/O is detected by checking (1) whether an incoming write access pattern matches the write pattern of append-only data or (2) whether a read command targets previously identified append-only data (a rough detection sketch follows this list)

  • Because append-only data are treated as cold data in the proposed method, append-only data I/O is allocated to separate buffer spaces: a buffer space for append-only write data (BAW) and a buffer space for append-only read data (BAR)

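As a rough illustration of the two detection rules above, the sketch below tracks the tail of each sequential write stream and the LBAs it has produced. The size threshold, the stream-tracking structure, and the handling of overwrites are assumptions made for illustration; the paper's actual detection conditions may differ.

```python
LARGE_WRITE_BLOCKS = 256     # assumed threshold for a "large" sequential write

stream_tails = set()         # next expected LBA of each tracked append-only stream
append_only_lbas = set()     # LBAs written by append-only streams so far

def classify_write(start_lba, nblocks):
    """Rule (1): a write is append-only if it is large, or if it
    sequentially extends an already tracked append-only stream."""
    extends_stream = start_lba in stream_tails
    if nblocks >= LARGE_WRITE_BLOCKS or extends_stream:
        stream_tails.discard(start_lba)
        stream_tails.add(start_lba + nblocks)      # advance the stream tail
        append_only_lbas.update(range(start_lba, start_lba + nblocks))
        return "append-only"
    # An in-place overwrite disqualifies previously append-only LBAs.
    append_only_lbas.difference_update(range(start_lba, start_lba + nblocks))
    return "normal"

def classify_read(start_lba, nblocks):
    """Rule (2): a read is append-only if it targets LBAs that were
    written by an append-only stream."""
    lbas = range(start_lba, start_lba + nblocks)
    return "append-only" if all(lba in append_only_lbas for lba in lbas) else "normal"
```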

Introduction

With the emergence of hyperscale environments such as the Internet of Things (IoT) and cloud computing, the number of users that can access a storage device has increased [1]–[3]. As the processing performance of computing devices and the capacity of data storage devices have increased, the number of applications that a user can operate simultaneously continues to grow, and multiple applications can access one storage device at the same time [1]. Heavy storage input/output (I/O) for cold data can increase the storage access latency of concurrently operating hot-data-intensive applications, because intensive I/O for data with low re-access frequency can evict hot data from the storage-internal buffer. A representative source of such cold-data I/O is the LSMKV store: all key-value writes are first buffered in memory (the main memory of the host side), and the buffered data are flushed to storage via large-sequential writes [19].
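
To make the resulting access pattern concrete, the following sketch mimics how an LSM-style store buffers writes in host memory and then emits them as one large, sequentially written file that is never overwritten in place. The class, the flush threshold, and the on-storage format are illustrative assumptions, not a specific LSMKV implementation.

```python
MEMTABLE_LIMIT = 4 * 1024 * 1024     # assumed flush threshold (4 MiB)

class TinyLSM:
    """Toy model of an LSM-style write path: key-value updates are
    buffered in host memory and flushed to storage as one large
    sequential (append-only) write."""

    def __init__(self, storage_append):
        self.memtable = {}                      # buffered key-value writes (host memory)
        self.mem_bytes = 0
        self.storage_append = storage_append    # callable issuing one sequential write

    def put(self, key: str, value: bytes):
        self.memtable[key] = value
        self.mem_bytes += len(key) + len(value)
        if self.mem_bytes >= MEMTABLE_LIMIT:
            self.flush()

    def flush(self):
        # The flushed file is written once, sequentially, and not
        # overwritten in place: the append-only pattern the paper targets.
        blob = b"".join(k.encode() + b"\x00" + v
                        for k, v in sorted(self.memtable.items()))
        self.storage_append(blob)
        self.memtable.clear()
        self.mem_bytes = 0
```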
