Abstract

Power densities and temperatures in today's high-performance circuits have reached alarmingly high levels as a result of aggressive technology scaling. Moreover, the various techniques used to keep them under control create "zones" of differing temperature, giving rise to thermal gradients across the chip. These gradients have a detrimental effect on wire delay, since the resistance of metals increases with temperature. Clock nets are especially susceptible to this effect because they span the entire chip. Several techniques have been proposed to counter the impact of temperature on clock speed; they range from re-designing the clock network under the assumption of a stationary thermal profile to more adaptive solutions that dynamically compensate the clock skew by replacing the original buffers with specially designed counterparts called tunable delay buffers (TDBs). Dynamic skew management based on TDBs requires the presence on the chip of a thermal management unit (TMU), which periodically selects the delay that each TDB must provide in order to optimize the skew. Preliminary implementations of such a unit, under basic assumptions on the distribution of sensors and their accuracy, have indicated a negligible impact on the original design. This work explores in detail several issues related to TMU design, showing that sensor distribution and accuracy can in fact affect the design significantly. We present the results of a careful exploration performed on a meaningful case study, quantifying area and power consumption.
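To make the control loop concrete, the following is a minimal sketch (not taken from the paper) of one periodic TMU invocation: it reads per-zone temperature sensors, estimates each clock branch's temperature-dependent delay, and picks a discrete delay code for each TDB so that faster branches are padded toward the slowest one, reducing skew. The linear delay model, the per-step TDB increment, and all names and constants are illustrative assumptions.

```python
# Illustrative TMU sketch; delay model and constants are assumptions,
# not values from the paper.
TDB_STEP_PS = 5.0           # assumed delay increment per TDB code, in ps
TEMP_COEFF_PS_PER_C = 0.8   # assumed wire-delay sensitivity to temperature

def branch_delay_ps(nominal_ps: float, temp_c: float, ref_c: float = 25.0) -> float:
    """Wire delay grows with temperature, since metal resistance increases."""
    return nominal_ps + TEMP_COEFF_PS_PER_C * (temp_c - ref_c)

def tmu_step(nominals_ps, sensor_temps_c, max_code=15):
    """One periodic TMU invocation: return one TDB delay code per clock branch."""
    delays = [branch_delay_ps(n, t) for n, t in zip(nominals_ps, sensor_temps_c)]
    target = max(delays)  # the slowest branch sets the target arrival time
    codes = []
    for d in delays:
        # Pad faster branches up toward the target, in discrete TDB steps.
        code = round((target - d) / TDB_STEP_PS)
        codes.append(min(max(code, 0), max_code))
    return codes

# Three identical branches crossing zones at 30, 60 and 90 degrees C:
codes = tmu_step([100.0, 100.0, 100.0], [30.0, 60.0, 90.0])
print(codes)  # the hottest (slowest) branch gets code 0, cooler ones are padded
```

With discrete codes, the residual skew after compensation is bounded by the TDB step size; a finer step reduces skew at the cost of a larger code range (and hence TMU area).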
