Abstract

This paper introduces a novel macromodeling method based on a recurrent neural network, called the deep independently recurrent neural network (DIRNN). The proposed method applies to time-domain modeling of nonlinear circuits and components and yields better training behavior because it overcomes the vanishing and exploding gradient problems encountered with conventional RNNs. In a conventional RNN, every neuron in a layer participates in the recurrent connections, which creates unnecessary couplings, increases the model’s complexity over time, and makes it hard to train on long time sequences. In the proposed DIRNN, the neurons are independent of each other in the recurrent connections: each neuron receives a recurrent connection only from its own previous hidden state. The validity of the proposed method is verified by modeling two nonlinear circuit examples: a multi-stage driver terminated by a multi-line interconnect, and an on-chip voltage generator.
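The independent recurrence described above can be sketched as a minimal cell in which the hidden-to-hidden weight is a vector applied elementwise, rather than a full matrix. This is an illustrative sketch only: the layer sizes, ReLU activation, and initialization below are assumptions, not the paper's exact DIRNN configuration.

```python
import numpy as np

class IndRNNCell:
    """Sketch of an independently recurrent cell: each neuron's state
    depends only on its own previous state (hypothetical parameters)."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(hidden_size, input_size))  # input weights
        # One recurrent weight PER neuron: u is a vector, not a full
        # hidden-to-hidden matrix, so neurons stay decoupled in recurrence.
        self.u = rng.uniform(-1.0, 1.0, size=hidden_size)
        self.b = np.zeros(hidden_size)

    def step(self, x, h_prev):
        # h_t = relu(W @ x_t + u * h_{t-1} + b); '*' is elementwise,
        # so each neuron sees only its own previous hidden state.
        return np.maximum(0.0, self.W @ x + self.u * h_prev + self.b)

# Run the cell over a short input sequence.
cell = IndRNNCell(input_size=2, hidden_size=4)
h = np.zeros(4)
for x in np.ones((5, 2)):
    h = cell.step(x, h)
print(h.shape)  # (4,)
```

Because the per-step recurrent Jacobian of each neuron is scalar, constraining the magnitude of each entry of `u` bounds how gradients grow or shrink across time steps, which is the mechanism that mitigates vanishing and exploding gradients in this family of models.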
