Abstract

In-storage processing technology allows applications to run on embedded processors and accelerators inside solid-state drives (SSDs), distributing computation efficiently. In particular, pattern matching applications benefit from in-storage computing: data access latency is low, and the number of I/Os can be reduced by returning only a small amount of results to the host system after processing. Previously proposed in-storage processing is separated into three phases: command decoding, data access, and data processing. In such designs, data processing is strictly isolated from data access, and this isolation constrains the utilization of the storage device. Merging data access and data processing can enhance storage utilization. To merge them efficiently, we propose two-stage in-storage processing and scheduling, targeting pattern matching applications. First-stage processing performed during data access reduces second-stage processing latency. In addition, leveraging the pattern matching results of the first stage, our scheduler prioritizes key requests, i.e., requests that must return results to the host system, so that they complete earlier than non-key requests. The proposed scheduling reduces the response time of in-storage processing requests by 52.6% on average.
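The key-request-first scheduling idea in the abstract can be illustrated with a minimal sketch (hypothetical class and method names, not the paper's firmware implementation): requests whose first-stage scan found a pattern match are "key" requests that must return results to the host, so they are dequeued before non-key requests, with FIFO order preserved within each class.

```python
import heapq
import itertools

class KeyFirstScheduler:
    """Toy priority queue: key requests (first-stage pattern hit) are
    served before non-key requests; FIFO order within each class."""

    KEY, NON_KEY = 0, 1  # lower value = higher priority

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves FIFO order

    def submit(self, request_id, first_stage_matched):
        cls = self.KEY if first_stage_matched else self.NON_KEY
        heapq.heappush(self._heap, (cls, next(self._seq), request_id))

    def next_request(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = KeyFirstScheduler()
sched.submit("req-A", first_stage_matched=False)
sched.submit("req-B", first_stage_matched=True)
sched.submit("req-C", first_stage_matched=True)
order = [sched.next_request() for _ in range(3)]
# Key requests req-B and req-C are dequeued before non-key req-A.
```

This captures only the prioritization; the paper's scheduler additionally handles slack-aware sub-request insertion and anti-starvation, per the section outline below.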

Highlights

  • Solid-state drive (SSD)-based in-storage processing has been applied to pattern matching applications such as search engines [18], [50] and key-value stores [5], [25], [55]

  • After the data are ready in the buffer, they are processed by the in-storage processing functions, and only the processing results are returned to the host

  • We propose multi-level scheduling that is suitable for in-storage processing and improves performance and quality of service (QoS)


Summary

INTRODUCTION

Solid-state drive (SSD)-based in-storage processing has been applied to pattern matching applications such as search engines [18], [50] and key-value stores [5], [25], [55]. In existing designs, all accessed data must be stored in the internal buffer, which lowers memory efficiency: in pattern matching applications, depending on the processing results, some data never need to be sent to the host system. Skipping the buffer write for such data increases memory efficiency. Some prior designs provide additional techniques to increase the quality of service (QoS) for queries, but they are constrained to optimizing only data access, assigning different priorities to internal request processing without regard to the characteristics of in-storage processing.
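The buffer-bypass idea above can be sketched as follows (a minimal illustration under our own assumptions, not the paper's firmware): each flash page is scanned for the pattern as it arrives, and only matching pages are written to the internal buffer, so data that would never reach the host does not occupy buffer space.

```python
def stream_with_match_filter(pages, pattern):
    """Toy first-stage filter: scan each page during data access and
    buffer only pages containing the pattern, so non-matching data
    never occupies the internal buffer."""
    buffer = []
    for page in pages:
        if pattern in page:       # first-stage pattern match during access
            buffer.append(page)   # only matching pages are buffered
    return buffer                 # second-stage processing sees only these

pages = [b"hello world", b"no match here", b"world peace"]
matched = stream_with_match_filter(pages, b"world")
# Only the two pages containing b"world" are buffered.
```

In the actual two-stage design, the first stage runs concurrently with flash reads rather than after them; this sequential sketch only shows the filtering effect on buffer occupancy.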

BACKGROUND
CONCEPT OF IN-STORAGE PROCESSING
TWO-STAGE DATA PROCESSING
TWO-PHASE IN-STORAGE PROCESSING ARCHITECTURE
PATTERN MATCHING DETECTION-GUIDED REORDERING
REORDERING WITH SLACK-AWARE SUB-REQUEST INSERTION
ANTI-STARVATION
IMPLEMENTATION OVERHEAD
PERFORMANCE EVALUATION
IN-STORAGE PROCESSING LATENCY
RELATED WORK
Findings
CONCLUSION
