Abstract

Many flash storage systems divide large input/output (I/O) requests into sub-requests to exploit their internal parallelism. In this case, an I/O request completes only after all of its sub-requests have completed, so non-critical sub-requests that finish quickly do not affect I/O latency. To reduce I/O latency efficiently, we propose a buffer management scheme that allocates buffer space by considering the relationship between sub-request processing time and I/O latency. The proposed scheme prevents non-critical sub-requests from wasting ready-to-use buffer space by avoiding situations in which both ready-to-use and not-ready-to-use buffer spaces are allocated to the same I/O request. To allocate a single type of buffer space to an I/O request, the proposed scheme first groups the sub-requests derived from the same I/O request and then allocates buffer space in units of sub-request groups. When the ready-to-use buffer space is insufficient for the sub-request group currently being processed, the proposed scheme does not allocate it to that group but instead sets it aside for future I/O requests. Experimental results show that the proposed scheme reduces I/O latency by up to 24% compared with prevalent buffer management schemes.
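The group-based allocation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class and method names are hypothetical, and "clean" entries stand in for ready-to-use buffer space while "dirty" entries stand in for space that still needs a write-back.

```python
from collections import deque

class GroupAwareBuffer:
    """Hypothetical sketch of group-aware buffer allocation.

    Sub-requests derived from one I/O request form a group, and the whole
    group is allocated entries of a single type, so that a slow dirty entry
    cannot stall an otherwise fast group of sub-requests.
    """

    def __init__(self, num_clean, num_dirty):
        self.clean = deque(range(num_clean))  # ready-to-use entries
        self.dirty = deque(range(num_clean, num_clean + num_dirty))  # need write-back

    def allocate_group(self, group_size):
        # Allocate clean entries only if the whole group fits; otherwise
        # fall back to dirty entries and set the clean ones aside for
        # future I/O requests.
        if len(self.clean) >= group_size:
            return [self.clean.popleft() for _ in range(group_size)]
        if len(self.dirty) >= group_size:
            return [self.dirty.popleft() for _ in range(group_size)]
        return None  # not enough entries of a single type
```

For example, with 4 clean and 8 dirty entries, a group of 6 sub-requests receives dirty entries only, leaving the 4 clean entries available for a later, smaller request.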

Highlights

  • To compensate for the difference in throughput between a flash memory device and the host interface, many flash storage systems use parallelism

  • If the number of clean entries is insufficient to be allocated to the sub-request group being processed, the group-aware least recently used (GALRU) scheme uses an eviction policy that selects a dirty entry as the victim so that clean entries can accumulate in the buffer, reducing the I/O latency of future I/O requests

  • To investigate the impact of the buffer management schemes on I/O latency when sub-requests are processed in parallel, we implemented a trace-driven in-house simulator based on Flash-DBsim to simulate various multi-channel, multi-way architectures


Summary

INTRODUCTION

To compensate for the difference in throughput between a flash memory device and the host interface, many flash storage systems use parallelism. Because eviction is performed in units of buffer entries whenever free buffer entries are insufficient for a given sub-request, clean and dirty data can be stored together in the buffer entries allocated to an I/O request, which is the parent of the sub-requests. In this case, a sub-request assigned a dirty entry, i.e., a buffer entry containing dirty data, is likely to be a critical sub-request because the flash device takes a long time to perform the write operation. If the number of clean entries is insufficient to be allocated to the sub-request group being processed, the group-aware least recently used (GALRU) scheme uses an eviction policy that selects a dirty entry as the victim so that clean entries can accumulate in the buffer, reducing the I/O latency of future I/O requests.
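The victim-selection policy described above can be illustrated with a short sketch. This is an assumption-laden simplification, not the paper's algorithm: entries are modeled as an LRU-ordered list (least recently used first), and the function name and entry fields are hypothetical.

```python
def select_victim(entries):
    """Hypothetical sketch of GALRU-style victim selection.

    `entries` is an LRU-ordered list (oldest first) of dicts such as
    {'page': ..., 'dirty': bool}. The policy prefers the least recently
    used *dirty* entry as the victim: writing it back converts buffer
    space into a clean, ready-to-use entry for future I/O requests.
    """
    for e in entries:  # scan from least to most recently used
        if e['dirty']:
            return e   # oldest dirty entry becomes the victim
    # No dirty entries: fall back to plain LRU.
    return entries[0] if entries else None
```

Under this sketch, a clean entry is evicted only when no dirty entry exists, so the pool of ready-to-use entries tends to grow rather than shrink under eviction pressure.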

BACKGROUND
RELATED WORK
PROPOSED GALRU SCHEME
CASE STUDY
PERFORMANCE EVALUATION
Findings
CONCLUSION