Abstract

The universal approximation theorem is generalised to uniform convergence on the (noncompact) input space $\mathbb{R}^n$. All continuous functions that vanish at infinity can be uniformly approximated by neural networks with one hidden layer, for all activation functions $\varphi$ that are continuous, nonpolynomial, and asymptotically polynomial at $\pm\infty$. When $\varphi$ is moreover bounded, we exactly determine which functions can be uniformly approximated by neural networks, with the following unexpected results. Let $\overline{\mathcal{N}_\varphi^l(\mathbb{R}^n)}$ denote the vector space of functions that are uniformly approximable by neural networks with $l$ hidden layers and $n$ inputs. For all $n$ and all $l \geq 2$, $\overline{\mathcal{N}_\varphi^l(\mathbb{R}^n)}$ turns out to be an algebra under the pointwise product. If the left limit of $\varphi$ differs from its right limit (for instance, when $\varphi$ is sigmoidal), the algebra $\overline{\mathcal{N}_\varphi^l(\mathbb{R}^n)}$ ($l \geq 2$) is independent of $\varphi$ and $l$, and equals the closed span of products of sigmoids composed with one-dimensional projections. If the left limit of $\varphi$ equals its right limit, $\overline{\mathcal{N}_\varphi^l(\mathbb{R}^n)}$ ($l \geq 1$) equals the (real part of the) commutative resolvent algebra, a C*-algebra which is used in mathematical approaches to quantum theory. In the latter case, the algebra is independent of $l \geq 1$, whereas in the former case $\overline{\mathcal{N}_\varphi^2(\mathbb{R}^n)}$ is strictly bigger than $\overline{\mathcal{N}_\varphi^1(\mathbb{R}^n)}$.
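For orientation, a minimal sketch of the notation: the one-hidden-layer class is usually parameterised as below (this particular parameterisation is the standard convention and is an assumption of this sketch, not quoted from the paper):
$$
\mathcal{N}_\varphi^1(\mathbb{R}^n) \;=\; \Big\{\, x \mapsto \sum_{j=1}^{N} c_j\, \varphi\big(\langle w_j, x\rangle + b_j\big) \;:\; N \in \mathbb{N},\ c_j, b_j \in \mathbb{R},\ w_j \in \mathbb{R}^n \,\Big\}.
$$
In this notation, the first result above reads $C_0(\mathbb{R}^n) \subseteq \overline{\mathcal{N}_\varphi^1(\mathbb{R}^n)}$, where the closure is taken in the supremum norm on all of $\mathbb{R}^n$, for every continuous, nonpolynomial activation $\varphi$ that is asymptotically polynomial at $\pm\infty$.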
