Abstract

In this paper, for the first time, the deep gated recurrent unit (Deep GRU) is used as a new macromodeling approach for nonlinear circuits. Like the Long Short-Term Memory (LSTM) network, the GRU has gating units that control the flow of information and make the network less prone to the vanishing gradient problem. Because it has fewer gates, the GRU has fewer parameters than the LSTM, which leads to better model accuracy. The gates also give the gradient formulations an additive nature, making the gradients more resistant to vanishing and consequently enabling the network to learn long sequences of data. The proposed macromodeling method models nonlinear circuits more accurately, and with fewer parameters, than the conventional LSTM macromodeling method. To further improve GRU performance, a regularization technique called Gaussian dropout is applied to the deep GRU (GDGRU) to reduce overfitting, resulting in lower test error. Additionally, the models obtained from the proposed techniques are remarkably faster than the original transistor-level models. To verify the superiority of the proposed method, time-domain modeling of three nonlinear circuits is provided, with comparisons of accuracy and speed among the conventional recurrent neural network (RNN), the LSTM, and the proposed macromodeling methods.
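For readers who want a concrete picture of the deep GRU macromodel described above, the following is a minimal sketch in PyTorch. The class name `GRUMacromodel`, the layer sizes, and the waveform-in/waveform-out interface are illustrative assumptions, not the paper's exact architecture; the stacked `nn.GRU` realizes the gated recurrence the abstract refers to.

```python
import torch
import torch.nn as nn

class GRUMacromodel(nn.Module):
    """Hypothetical deep GRU macromodel: maps a sampled circuit input
    waveform x[t] to a predicted output waveform y[t]."""

    def __init__(self, in_dim=1, hidden_size=64, num_layers=3, out_dim=1):
        super().__init__()
        # Stacked ("deep") GRU; each cell's update and reset gates
        # control the flow of information through time.
        self.gru = nn.GRU(in_dim, hidden_size, num_layers, batch_first=True)
        self.readout = nn.Linear(hidden_size, out_dim)

    def forward(self, x):
        # x: (batch, time, in_dim) -> y: (batch, time, out_dim)
        h, _ = self.gru(x)
        return self.readout(h)

model = GRUMacromodel()
x = torch.randn(8, 200, 1)   # batch of 8 input waveforms, 200 time samples
y = model(x)                 # predicted output waveforms, shape (8, 200, 1)
```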
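Gaussian dropout multiplies activations by noise drawn from a Gaussian centered at 1, rather than zeroing them as standard dropout does. PyTorch has no built-in Gaussian dropout layer, so the sketch below assumes a custom module; the variance parameterization sigma^2 = p / (1 - p) follows a common convention, and its exact placement within the GDGRU is likewise an assumption rather than the paper's stated configuration.

```python
import torch
import torch.nn as nn

class GaussianDropout(nn.Module):
    """Multiplicative Gaussian noise: x * n with n ~ N(1, sigma^2),
    where sigma^2 = p / (1 - p). Identity at inference time."""

    def __init__(self, p=0.1):
        super().__init__()
        if not 0.0 <= p < 1.0:
            raise ValueError("p must be in [0, 1)")
        self.sigma = (p / (1.0 - p)) ** 0.5

    def forward(self, x):
        if self.training and self.sigma > 0.0:
            # Noise has mean 1, so no rescaling is needed at test time.
            return x * (1.0 + self.sigma * torch.randn_like(x))
        return x
```

Because the `dropout` argument of `nn.GRU` applies standard (Bernoulli) dropout between layers, one way to realize a GDGRU-style model would be to stack single-layer GRUs manually and interleave a module like this between them.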
