Abstract

We examine the closedness of sets of realized neural networks of a fixed architecture in Sobolev spaces. For an exactly m-times differentiable activation function ρ, we construct a sequence of neural networks (Φn)n∈N whose realizations converge in order-(m−1) Sobolev norm to a function that cannot be realized exactly by a neural network. Thus, sets of realized neural networks are not closed in order-(m−1) Sobolev spaces Wm−1,p for p∈[1,∞). We further show that these sets are not closed in Wm,p under slightly stronger conditions on the mth derivative of ρ. For a real analytic activation function, we show that sets of realized neural networks are not closed in Wk,p for any k∈N. The nonclosedness allows for approximation of non-network target functions with unbounded parameter growth. We partially characterize the rate of parameter growth for most activation functions by showing that a specific sequence of realized neural networks can approximate the activation function's derivative with weights that grow inversely proportionally to the Lp approximation error. Finally, we present experimental results showing that networks are capable of closely approximating non-network target functions with increasing parameters via training.
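The growth phenomenon described above can be illustrated with a minimal numerical sketch. The following is not the paper's exact construction, but the classic difference-quotient network it is reminiscent of: a two-neuron realization Φ_h(x) = (1/h)(ρ(x+h) − ρ(x)) converges to ρ′(x) as h → 0, while the outer weight 1/h blows up as the approximation error shrinks. The choice of ρ = tanh here is an illustrative assumption.

```python
import numpy as np

# Assumed smooth activation for illustration: rho = tanh, rho' = 1 - tanh^2.
rho = np.tanh
drho = lambda x: 1.0 - np.tanh(x) ** 2

xs = np.linspace(-3, 3, 1001)

for h in [1e-1, 1e-2, 1e-3]:
    # Two-neuron "network": outer weight 1/h, inner shifts 0 and h.
    phi_h = (rho(xs + h) - rho(xs)) / h
    # Sup-norm error on the grid, a proxy for the Lp approximation error.
    err = np.max(np.abs(phi_h - drho(xs)))
    print(f"h={h:g}  outer weight 1/h={1/h:g}  error ~ {err:.2e}")
```

As h decreases, the printed error shrinks roughly linearly in h while the outer weight 1/h grows at the reciprocal rate, matching the inverse-proportionality between parameter size and error discussed above.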
