Abstract

In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with a continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in ℝˢ (Neural Networks, 4(2), 251–257 (1991)). At the same time, he noted: "Whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem". This paper answers the problem in the affirmative and proves that, for an activation function σ that is bounded and continuous almost everywhere (a.e.) on ℝ, the collection of SLFNs is dense in C(K) if and only if σ is not constant a.e.
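For concreteness, the class of SLFNs in question can be written (the notation here is a standard convention, not taken verbatim from the paper) as

\[
\mathcal{N}_\sigma = \Big\{\, x \mapsto \sum_{i=1}^{N} c_i\, \sigma(w_i \cdot x + \theta_i) \;:\; N \in \mathbb{N},\ c_i, \theta_i \in \mathbb{R},\ w_i \in \mathbb{R}^s \Big\},
\]

and density in C(K) means that for every continuous f on K and every \(\varepsilon > 0\) there is a \(g \in \mathcal{N}_\sigma\) with \(\sup_{x \in K} |f(x) - g(x)| < \varepsilon\).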
