The last-level cache (LLC), a major consumer of chip area, exhibits the highest sensitivity to soft errors. Block reuse prediction enables selective protection of LLC blocks; however, traditional reuse predictors fail to accurately capture cache access patterns. This work explores machine learning for block reuse prediction, learning the features that affect reuse likelihood and thereby predicting reusable blocks more accurately. To offload the reuse calculation from the processing cores, a dedicated reuse-predictor engine is designed. This engine implements a deep neural network with unsupervised learning and calculates reuse probabilities offline, without affecting the execution path. High-priority blocks with high reuse likelihood are protected through replication in the shared LLC, while low-priority blocks are invalidated to reduce their vulnerable time. A replication-aware replacement policy is also developed to protect replicated/reserved blocks from eviction. The scheme is evaluated in the Multi2Sim 5.0 simulation framework with SPEC CPU benchmarks, considering bulk-CMOS, FDSOI, and FinFET technologies. The results indicate 41.29% and 43.14% reductions in miss rate, 92.29% and 89.56% reductions in vulnerability, 5.73% and 6.31% increases in write-back rate, 17.55% and 16.27% reductions in average memory access time, 2.17% and 1.54% increases in instructions per cycle (IPC), and 1.31% and 0.77% reductions in normalized execution cycles over the baseline (with SECDED protection) for integer and floating-point benchmarks, respectively. Marginal increases in power consumption (1.33%) and dynamic energy (0.43%) are observed at a 6.32% area overhead.