Abstract

In this study, we design a KM-Cluster-based, pattern-adaptive prefetching mechanism for the last-level cache to support real-time big data management. The goal is to predict future memory access patterns aggressively and accurately through a new self-learning prefetching engine model. The pattern-adaptive last-level cache consists of three major parts: the last-level cache, the first-level prefetching buffer (FLPB), and the second-level prefetching buffer (SLPB). The SLPB efficiently manages the history records of cache blocks evicted from the last-level cache through a self-learning mechanism, and a K-means clustering algorithm is used as the SLPB prefetching scheme. The hybrid main memory is constructed from a small DRAM buffer combined with a primarily NAND-Flash memory space. The overall performance of the proposed model is evaluated on OpenStack Swift and the in-memory database application Redis. Experimental results show that the proposed architecture reduces total execution time by 20.96% and power consumption by 31.9% compared with a baseline of the same last-level cache size without the SLPB structure.
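To illustrate the idea of using K-means over the SLPB's eviction history, the following is a minimal sketch, not the authors' implementation: it clusters per-block history records (here assumed to be access stride and reuse distance, both illustrative feature choices) and prefetches along the stride of the nearest pattern cluster. All names, features, and the prefetch-degree policy are assumptions for illustration only.

```python
# Hypothetical sketch: K-means over evicted-block history records, used to pick
# a prefetch stride. Feature choice (stride, reuse distance) is an assumption.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means: returns (centroids, labels) for 2-D feature vectors."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each history record to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old centroid if a cluster emptied out.
        for c in range(k):
            members = points[labels == c]
            if len(members) > 0:
                centroids[c] = members.mean(axis=0)
    return centroids, labels

# Each evicted-block history record: (access stride in blocks, reuse distance).
history = np.array([[1, 4], [1, 5], [2, 40], [64, 8], [64, 9], [65, 7]], dtype=float)
centroids, _ = kmeans(history, k=2)

def prefetch_candidates(current_block, current_features, centroids, degree=2):
    """Pick the nearest pattern cluster and prefetch `degree` blocks along its stride."""
    nearest = np.linalg.norm(centroids - current_features, axis=1).argmin()
    stride = int(round(centroids[nearest][0]))
    return [current_block + stride * i for i in range(1, degree + 1)]

# Example: a demand miss on block 1000 whose recent behavior matches the
# large-stride cluster would trigger prefetches of blocks 1064 and 1128.
print(prefetch_candidates(1000, np.array([64.0, 8.0]), centroids))
```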
