Abstract

One-hidden-layer feedforward neural networks can be described as functions with many real-valued parameters. Universal approximation properties of such networks are well established, and the approximation error can be related to the number of parameters in the network; essentially optimal bounds on the order of the approximation error were already derived in 1996. Motivated by numerical experiments indicating that neural networks whose parameters contain stochastic perturbations perform better than ordinary neural networks, we explored the approximation properties of networks with stochastic perturbations. In this paper, we derive the quantitative order of the variance of the stochastic perturbations needed to achieve the essentially optimal approximation order, and we verify the validity of our theory through numerical experiments.
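The setting above can be illustrated with a minimal sketch, not the paper's implementation: a one-hidden-layer network whose parameters receive additive Gaussian perturbations at evaluation time. The function names, the ReLU activation, and the noise scale `sigma` are illustrative assumptions; the paper studies how the variance of such perturbations must scale to preserve the optimal approximation order.

```python
# Sketch (illustrative, not the authors' code): a one-hidden-layer network
# f(x) = sum_j c_j * relu(W_j . x + b_j), with optional i.i.d. Gaussian
# perturbations of standard deviation sigma added to every parameter.
import numpy as np

rng = np.random.default_rng(0)

def one_hidden_layer(x, W, b, c, sigma=0.0):
    # Perturb each parameter with independent Gaussian noise of variance sigma**2.
    Wp = W + sigma * rng.standard_normal(W.shape)
    bp = b + sigma * rng.standard_normal(b.shape)
    cp = c + sigma * rng.standard_normal(c.shape)
    hidden = np.maximum(Wp @ x + bp, 0.0)  # ReLU hidden units
    return cp @ hidden                      # linear output layer

# Example: a network with 8 hidden units on a 2-dimensional input.
W = rng.standard_normal((8, 2))
b = rng.standard_normal(8)
c = rng.standard_normal(8)
x = np.array([0.5, -1.0])

clean = one_hidden_layer(x, W, b, c, sigma=0.0)   # deterministic evaluation
noisy = one_hidden_layer(x, W, b, c, sigma=0.01)  # stochastically perturbed
```

With a small `sigma`, the perturbed output stays close to the deterministic one; the theoretical question is how fast the perturbation variance may shrink (or must shrink) with the number of parameters while the optimal approximation rate is retained.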
