Over the past few decades, hardware implementations of feedforward neural networks have attracted considerable attention. However, when a network is realized in analog circuits, the circuit-based model becomes sensitive to hardware nonidealities. Nonidealities such as random offset voltage drift and thermal noise introduce variations in the hidden neurons and, in turn, alter the network's behavior. This paper models these effects as time-varying noise with a zero-mean Gaussian distribution at the inputs of the hidden neurons. First, we derive lower and upper bounds on the mean squared error (MSE) loss to estimate the inherent noise tolerance of a feedforward network trained without noise. The lower bound is then extended to arbitrary non-Gaussian noise using the Gaussian mixture model concept, and the upper bound is generalized to noise with nonzero mean. Since such noise can degrade network performance, a new network architecture is designed to suppress the noise effect. This noise-resilient design requires no training process. We also discuss its limitation and give a closed-form expression for the noise tolerance when that limitation is exceeded.
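To make the noise model concrete, below is a minimal sketch of the setup the abstract describes: zero-mean Gaussian noise injected at the hidden-neuron inputs of a feedforward network, with the resulting MSE degradation estimated empirically. The network size, weights, and activation are illustrative stand-ins, not the paper's trained model or its analytical bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-hidden-layer network; random weights stand in
# for a noise-free trained model (assumption for this sketch).
n_in, n_hidden, n_out = 4, 16, 1
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = rng.normal(size=n_out)

def forward(x, noise_std=0.0):
    """Forward pass with optional zero-mean Gaussian noise injected
    at the hidden-neuron inputs (pre-activation)."""
    pre = W1 @ x + b1
    if noise_std > 0.0:
        # Time-varying noise: a fresh sample is drawn on every pass.
        pre = pre + rng.normal(scale=noise_std, size=pre.shape)
    h = np.tanh(pre)
    return W2 @ h + b2

# Empirically estimate the extra MSE caused by the injected noise.
X = rng.normal(size=(1000, n_in))
clean = np.array([forward(x) for x in X])
for sigma in (0.0, 0.1, 0.5):
    noisy = np.array([forward(x, noise_std=sigma) for x in X])
    mse = np.mean((noisy - clean) ** 2)
    print(f"sigma={sigma:.1f}  extra MSE vs. noise-free output: {mse:.4f}")
```

In such a simulation the empirical MSE grows with the noise variance, which is the quantity the paper's lower and upper bounds bracket analytically for a noise-free trained network.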