Abstract

For a variety of applications, from mobile to high-performance computing, the power consumption of very-large-scale-integrated (VLSI) circuits is a serious issue. The scaling rule has long been the paradigm for miniaturizing complementary metal-oxide-semiconductor (CMOS) field-effect transistors (FETs) in VLSI circuits. Under the ideal scaling rule, the supply voltage Vdd should decrease in proportion to the miniaturization of the transistor, and this Vdd reduction has largely been achieved so far. In extremely scaled transistors, however, such as those at the 45-nm logic node and beyond, it is very difficult to decrease Vdd further. Unless Vdd is reduced in accordance with the scaling rule, the power consumption of an LSI chip increases significantly, owing to increases in both operational and standby-leakage power (Sakurai, 2004; Chen, 2006). The primary cause of this difficulty is widely recognized to be the increase in threshold-voltage (Vth) variation of CMOSFETs, because Vdd must be set higher to provide a margin against the increased Vth variation (Takeuchi et al., 1997). Variation of transistor characteristics, primarily Vth variation, is increasing substantially in sub-100-nm technologies; this makes the Vdd reduction required by the scaling rule difficult and significantly increases the power consumption of an LSI chip. Here, the power consumption P of an inverter, the representative LSI unit circuit, is defined as
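The equation itself is not reproduced in this extract. As a hedged sketch, the definition commonly used in this literature, consistent with the abstract's split into operational and standby-leakage power, takes the following form (the symbols p_t, f, C_L, and I_leak are assumptions of this sketch, not taken from the source):

```latex
% Commonly used form of inverter power consumption (a sketch; the
% source's exact equation is not shown in this extract).
% p_t: switching activity factor, f: clock frequency,
% C_L: load capacitance, V_dd: supply voltage,
% I_leak: standby leakage current.
P = \underbrace{p_t \, f \, C_L \, V_{dd}^{2}}_{\text{operational (dynamic)}}
  + \underbrace{V_{dd} \, I_{\mathrm{leak}}}_{\text{standby leakage}}
```

Both terms grow with Vdd (quadratically for the dynamic term), which is why the failure to scale Vdd down translates directly into the power increase the abstract describes.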
