Abstract

The increasing need to run data-analytics applications and the growing demand for big-data computing tools have created a need for efficient platforms that combine high performance with reasonable power consumption, such as chip multiprocessors (CMPs). At the same time, shrinking feature sizes and the drive to pack ever more transistors into a single chip pose serious design challenges, as a significant amount of power is consumed within high area densities. We present a reconfigurable hybrid cache system for the last-level cache (LLC) that integrates an emerging memory technology, STT-RAM, with SRAM. The approach consists of two phases: off-time and on-time. In the off-time phase, a neural network (NN) is trained; in the on-time phase, the reconfigurable cache uses the NN to predict the latency demanded by the running application. Experimental results for a three-dimensional chip with 64 cores show that, under the PARSEC benchmarks, the proposed design achieves a 25% performance speedup and reduces energy consumption by 78.4% compared to a non-reconfigurable pure-SRAM cache architecture.
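The two-phase idea can be illustrated with a minimal sketch: a predictor is fitted off-time on application features, then used on-time to pick a hybrid SRAM/STT-RAM way partition. This is not the paper's implementation; the model (a one-layer linear predictor standing in for the NN), the feature set, the `reconfigure` helper, and the latency threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Off-time phase: fit a tiny predictor (stand-in for the trained NN). ---
# Hypothetical features (e.g. miss rate, write intensity) -> demanded latency.
X = rng.random((200, 2))
true_w = np.array([30.0, 10.0])          # synthetic ground truth for the sketch
y = X @ true_w + 5.0

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):                     # plain gradient descent on MSE
    err = X @ w + b - y
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()

# --- On-time phase: predict latency, then choose the SRAM/STT-RAM split. ---
def reconfigure(features, total_ways=16, latency_threshold=25.0):
    """Return (sram_ways, sttram_ways) for the predicted demanded latency."""
    latency = features @ w + b
    if latency > latency_threshold:       # latency-critical: favor fast SRAM
        sram = total_ways * 3 // 4
    else:                                 # latency-tolerant: favor dense STT-RAM
        sram = total_ways // 4
    return sram, total_ways - sram
```

For a latency-critical workload the sketch allocates most ways to SRAM, and for a tolerant one it shifts capacity to the denser, lower-leakage STT-RAM, mirroring the trade-off the abstract describes.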

