Abstract

Applications have different preferences for cache configuration, sometimes even across the different phases of a single run. A cache with fixed parameters may therefore compromise system performance. To solve this problem, we propose a real-time adaptive reconfigurable cache based on the decision tree algorithm, which optimizes the average memory access time of the cache without modifying the cache coherence protocol. By monitoring the application's running state, the cache associativity is periodically tuned to the optimal associativity determined by the decision tree model. This paper implements the proposed decision tree-based adaptive reconfigurable cache in the GEM5 simulator and designs the key modules in Verilog HDL. The simulation results show that the proposed decision tree-based adaptive reconfigurable cache reduces the average memory access time compared with other adaptive algorithms.
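
For reference, the average memory access time (AMAT) that the scheme optimizes is usually defined by the standard relation below; the notation is the common textbook form rather than symbols taken from this paper:

\[
\mathrm{AMAT} = t_{\mathrm{hit}} + r_{\mathrm{miss}} \times t_{\mathrm{miss\ penalty}}
\]

For a hierarchy with a reconfigurable L2 cache, the L2 miss penalty is essentially the main memory latency, so reducing the L2 miss rate by selecting a better associativity directly reduces AMAT.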

Highlights

  • The speed of logical processors has improved rapidly over the last 50 years following Moore’s law, while main memory access latency has improved only slowly, resulting in a growing gap between processor speed and main memory access latency [1]

  • This paper proposes an adaptive reconfigurable cache scheme based on a decision tree algorithm: (1) an optimal associativity search scheme is designed, and its results are used as the data set to train the optimal-associativity decision tree model (a training sketch follows these highlights); (2) an adaptive decision-making algorithm based on the decision tree periodically tunes the associativity to reduce the average memory access time of the L2 cache; (3) a dirty-block writeback and cache flush operation is introduced after associativity tuning, which avoids modifying the cache coherence protocol

  • Compared with other adaptive schemes, the decision tree-based adaptive reconfigurable cache scheme proposed in this paper significantly reduces the average memory access time
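
As a rough illustration of highlight (1), the sketch below trains a decision tree that maps per-interval cache statistics to the associativity found best by an exhaustive search. It is a minimal sketch under assumed names: the feature set (miss_rate, accesses, writebacks), the candidate associativities, and the tree depth are illustrative choices, not the paper's exact design.

    # Minimal sketch: train a decision tree that predicts the optimal L2
    # associativity from per-interval cache statistics.  The feature set,
    # label encoding, and candidate associativities are assumptions, not
    # the paper's exact design.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training data produced by the optimal-associativity search:
    # each row holds statistics sampled over one execution interval, and the
    # label is the associativity (ways) that gave the lowest average memory
    # access time for that interval.
    X = np.array([
        [0.12, 48000, 300],   # [miss_rate, accesses, writebacks]
        [0.03, 52000, 120],
        [0.25, 47000, 800],
    ])
    y = np.array([8, 2, 8])   # best associativity found by exhaustive search

    # A shallow tree keeps the model small enough for a cheap hardware lookup.
    model = DecisionTreeClassifier(max_depth=4)
    model.fit(X, y)

    # At run time the same statistics are fed in to pick the next associativity.
    print(model.predict([[0.18, 50000, 500]]))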


Summary

Introduction

The speed of logical processors has improved rapidly over the last 50 years following Moore’s law, while main memory access latency has improved only slowly, resulting in a growing gap between processor speed and main memory access latency [1]. Applications such as integrated circuit synthesis, weather forecasting, and fluid simulation are designed to be executed in parallel to take advantage of multicore processors [4]. They usually run on desktop computing platforms or high-performance servers that are not sensitive to energy consumption. Machine learning algorithms are well suited to such optimization problems. This paper therefore proposes an adaptive reconfigurable cache scheme based on a decision tree algorithm: (1) an optimal associativity search scheme is designed, and its results are used as the data set to train the optimal-associativity decision tree model; (2) an adaptive decision-making algorithm based on the decision tree periodically tunes the associativity to reduce the average memory access time of the L2 cache; (3) a dirty-block writeback and cache flush operation is introduced after associativity tuning, which avoids modifying the cache coherence protocol. Compared with other adaptive schemes, the decision tree-based adaptive reconfigurable cache scheme proposed in this paper significantly reduces the average memory access time.
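
To make the run-time flow concrete, here is a self-contained toy sketch of the periodic tuning loop: statistics are sampled at a fixed interval, a predictor (standing in for the trained decision tree) chooses the next associativity, and the cache is flushed on reconfiguration, mirroring the dirty-block writeback that lets the coherence protocol stay unmodified. The cache model, interval length, and thresholds are assumptions for illustration, not GEM5 components or the paper's design.

    # Self-contained toy sketch of the periodic associativity-tuning loop.
    import random
    from collections import deque

    class ReconfigurableCache:
        """Toy set-associative cache (LRU) whose associativity can be retuned."""
        def __init__(self, num_blocks=256, ways=4):
            self.num_blocks = num_blocks
            self.set_associativity(ways)

        def set_associativity(self, ways):
            # Rebuild the sets on reconfiguration; in the real design dirty
            # blocks are written back and the cache is flushed first, so the
            # coherence protocol needs no modification.
            self.ways = ways
            self.sets = [deque(maxlen=ways) for _ in range(self.num_blocks // ways)]
            self.hits = self.misses = 0

        def access(self, addr):
            s = self.sets[addr % len(self.sets)]
            tag = addr // len(self.sets)
            if tag in s:
                s.remove(tag)
                s.append(tag)          # move to MRU position
                self.hits += 1
            else:
                s.append(tag)          # LRU victim falls off automatically
                self.misses += 1

        def miss_rate(self):
            total = self.hits + self.misses
            return self.misses / total if total else 0.0

    def choose_ways(miss_rate):
        """Stand-in for the trained decision tree: map statistics to ways."""
        if miss_rate > 0.20:
            return 8
        if miss_rate > 0.05:
            return 4
        return 2

    cache = ReconfigurableCache()
    trace = [random.randrange(4096) for _ in range(50_000)]  # synthetic address trace
    INTERVAL = 10_000                                        # accesses per tuning interval

    for i, addr in enumerate(trace, 1):
        cache.access(addr)
        if i % INTERVAL == 0:                    # end of a tuning interval
            target = choose_ways(cache.miss_rate())
            if target != cache.ways:
                cache.set_associativity(target)  # writeback + flush, then retune

    print("final associativity:", cache.ways)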

Related Work
Optimal Associativity Search Scheme
Adaptive Decision-Making Algorithm Based on Decision Tree
Hardware Design
Results and Analysis
Experimental Setup
Comparison
Conclusion