Abstract

In-situ timing error detection and correction mechanisms (such as Razor) monitor the actual datapaths and are considered more resilient for adaptive voltage scaling (AVS), especially in the presence of local variations. However, Razor suffers from serious hold-time problems, and the extensive buffer padding required to fix them makes it impractical in advanced process technologies. Pre-error (or in-situ canary) detection was therefore proposed as an alternative, which also eliminates the need for sophisticated error correction. Independent researchers proposed a Markov chain model to design the pre-error AVS controller and explicitly trade quality for energy, under the assumption that the input patterns follow a Gaussian delay distribution. For error-tolerant applications in which a few errors are acceptable, pre-error AVS has been shown to be more efficient than Razor-based approaches. In this paper, a pre-error AVS system is constructed on a 28nm FPGA platform with a programmable power supply. We observe that when the delay distributions are time-varying and non-Gaussian, overoptimistic voltage scaling (OVS) can occur and may lead to serious problems in pre-error AVS. To resolve the OVS problem, we propose inserting monitoring logic that collects delay-criticality information, which is fed into a Q-learning model to form a learning-based AVS scheme. Experimental results show that, compared with the original pre-error AVS that assumes a static Gaussian delay distribution, the proposed scheme saves 10.50% power while reducing the error rate by 0.16% for random inputs, and saves 12.51% power while reducing the error rate by 0.06% for non-random inputs.
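To illustrate the kind of controller the abstract describes, the following is a minimal tabular Q-learning sketch of a learning-based AVS loop. It is an assumption-laden illustration, not the paper's implementation: the state encoding (delay-criticality bins), action set (discrete voltage levels), reward weighting, and all hyperparameters below are hypothetical.

import random

N_LEVELS = 8          # discrete supply-voltage levels (assumed)
N_BINS = 4            # delay-criticality bins reported by the inserted monitors (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration (assumed)
LAMBDA = 5.0          # penalty weight on pre-error warnings vs. power saving (assumed)

# Q[state][action]: state = criticality bin, action = voltage-level index
Q = [[0.0] * N_LEVELS for _ in range(N_BINS)]

def choose_level(state):
    """Epsilon-greedy selection of the next voltage level."""
    if random.random() < EPS:
        return random.randrange(N_LEVELS)
    row = Q[state]
    return row.index(max(row))

def reward(level, pre_error_rate):
    """Lower voltage earns a power-saving reward; pre-error warnings are penalized."""
    power_saving = (N_LEVELS - 1 - level) / (N_LEVELS - 1)
    return power_saving - LAMBDA * pre_error_rate

def update(state, level, r, next_state):
    """One standard tabular Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][level] += ALPHA * (r + GAMMA * best_next - Q[state][level])

In use, each control interval would read the current criticality bin from the monitors, apply choose_level, observe the resulting pre-error rate, and call update; the LAMBDA weight is where quality is explicitly traded for energy.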
