The increasing need to run data-analytics applications and the growing demand for big-data computing tools have created a need for efficient platforms that combine high performance with manageable power consumption, such as chip multiprocessors. At the same time, shrinking feature sizes and the drive to pack ever more transistors onto a single chip pose serious design challenges, as significant power is consumed within high area densities. We present a reconfigurable hybrid last-level cache that integrates an emerging memory technology, STT-RAM, with SRAM. The approach operates in two phases: off-time and on-time. In the off-time phase, a neural network is trained; in the on-time phase, the reconfigurable cache uses this neural network to predict the latency demanded by the running application. Experimental results on a three-dimensional 64-core chip show that, under the PARSEC benchmarks, the proposed design improves performance by 25% and reduces energy consumption by 78.4% compared with a non-reconfigurable pure-SRAM cache architecture.
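The two-phase flow described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature names (write ratio, miss rate), the linear predictor standing in for the neural network, the training data, and the decision threshold are all assumptions made for the example.

```python
def train_offline(samples, epochs=500, lr=0.1):
    """Off-time phase: fit a tiny linear model mapping hypothetical
    application features (write ratio, miss rate) to a latency-demand
    score. A stand-in for the paper's neural-network training."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def reconfigure_online(features, w, b, threshold=0.5):
    """On-time phase: predict the running application's latency demand
    and choose a cache configuration. Latency-sensitive workloads favor
    fast SRAM ways; others favor denser, lower-leakage STT-RAM ways."""
    score = sum(wi * xi for wi, xi in zip(w, features)) + b
    return "more-SRAM" if score > threshold else "more-STT-RAM"

# Toy training set: ([write_ratio, miss_rate], latency-demand label)
samples = [([0.9, 0.8], 1.0), ([0.1, 0.2], 0.0),
           ([0.8, 0.7], 1.0), ([0.2, 0.1], 0.0)]
w, b = train_offline(samples)
print(reconfigure_online([0.85, 0.75], w, b))  # likely "more-SRAM"
```

The separation matters because training is expensive and happens once offline, while the online decision is a cheap dot product that can run per reconfiguration interval.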