Abstract
Graphs are used in various fields owing to the growth of social media and mobile devices. Many studies have been conducted on caching techniques to reduce input/output costs when processing large amounts of graph data. In this paper, we propose a two-level caching scheme that considers the past usage patterns of subgraphs and graph connectivity, which are features of graph topology. The proposed cache is divided into a used cache and a prefetched cache, which manage previously used subgraphs and subgraphs expected to be used in the future, respectively. When memory is full, a strategy is needed to replace a subgraph in memory with a new one. Subgraphs in the used cache are managed by a time-to-live (TTL) value, and subgraphs with low TTL values are targeted for replacement. Subgraphs in the prefetched cache are managed in a queue structure, so first-in subgraphs are targeted for replacement first. When a cache hit occurs in the prefetched cache, the subgraph is migrated to the used cache and managed there. Performance evaluation shows that the proposed scheme, by taking subgraph usage patterns and graph connectivity into account, improves cache hit rates and data access speeds compared with conventional techniques. The proposed scheme can quickly process and analyze large graph queries in computing environments with limited memory, and can be used to speed up in-memory processing in applications where relationships between objects are complex, such as the Internet of Things and social networks.
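The two-level structure described in the abstract can be illustrated with a minimal sketch. The class and method names, capacities, and initial TTL value below are assumptions for illustration, not the paper's implementation: the used cache evicts the entry with the lowest TTL, the prefetched cache evicts in FIFO order, and a hit in the prefetched cache migrates the subgraph into the used cache.

```python
from collections import OrderedDict


class TwoLevelCache:
    """Illustrative sketch of the two-level subgraph cache described above.

    Assumptions (not from the paper): capacities, the initial TTL value,
    and the point at which TTLs are decremented.
    """

    def __init__(self, used_capacity=4, prefetch_capacity=4, initial_ttl=3):
        self.used = {}                   # subgraph id -> (subgraph, ttl)
        self.prefetched = OrderedDict()  # insertion order = FIFO eviction order
        self.used_capacity = used_capacity
        self.prefetch_capacity = prefetch_capacity
        self.initial_ttl = initial_ttl

    def get(self, sid):
        """Look up a subgraph; return it on a hit, None on a miss."""
        if sid in self.used:
            sub, _ = self.used[sid]
            self.used[sid] = (sub, self.initial_ttl)  # a hit refreshes the TTL
            return sub
        if sid in self.prefetched:
            sub = self.prefetched.pop(sid)            # migrate to the used cache
            self._put_used(sid, sub)
            return sub
        return None

    def age(self):
        """Decrement the TTL of every subgraph in the used cache."""
        for sid, (sub, ttl) in list(self.used.items()):
            self.used[sid] = (sub, ttl - 1)

    def _put_used(self, sid, sub):
        if len(self.used) >= self.used_capacity:
            # Replacement target: the subgraph with the lowest TTL value.
            victim = min(self.used, key=lambda k: self.used[k][1])
            del self.used[victim]
        self.used[sid] = (sub, self.initial_ttl)

    def prefetch(self, sid, sub):
        """Insert a subgraph expected to be used soon into the prefetched cache."""
        if len(self.prefetched) >= self.prefetch_capacity:
            self.prefetched.popitem(last=False)       # evict the first-in subgraph
        self.prefetched[sid] = sub
```

For example, after `prefetch("a", sub_a)`, a later `get("a")` hits the prefetched cache and moves the subgraph into the used cache, where it is then protected by its TTL rather than FIFO order.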
Highlights
Users have created and utilized a large amount of graph data due to the advancement of internet technology and mobile devices [1,2]
We propose a two-level caching strategy that stores subgraphs likely to be accessed, considering subgraph usage patterns, to prevent the caching of low-usage subgraphs and frequent subgraph replacement in memory
The performance evaluation environment is presented in a table, and simulations were conducted in this environment
Summary
Users have created and utilized a large amount of graph data due to the advancement of internet technology and mobile devices [1,2]. Existing graph caching schemes stored all neighbor vertices in the cache when an arbitrary vertex was accessed [28,29]. In [29], a replacement policy based on the topology characteristics of graph data was proposed: it cached the neighbor vertices of the accessed vertex, since those neighbors were likely to be used again in the future. We propose a two-level caching strategy that stores subgraphs likely to be accessed, considering subgraph usage patterns, to prevent the caching of low-usage subgraphs and frequent subgraph replacement in memory.