Abstract

Artificial neural networks are growing in both their number of applications and their complexity, which makes minimizing the number of units important for some practical implementations. A particular problem is determining the minimum number of units that a feed-forward neural network needs in its first layer. To study this problem, a family of classification problems is defined under a continuity hypothesis, in which inputs that are close to some set of points may share the same category. Given a set $S$ of $k$-dimensional inputs, let $\mathcal{N}$ be a feed-forward neural network that classifies every input in $S$ within a fixed error; it is proved that $\mathcal{N}$ requires $\Theta(k)$ units in its first layer if $\mathcal{N}$ can solve any instance of the given family of classification problems. Furthermore, this asymptotic result is optimal.
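As a hedged restatement of the main claim (the precise family of classification problems, the error tolerance $\varepsilon$, and the constants hidden in the asymptotic notation are defined only in the full text, so the symbols below are illustrative):

$$ S \subset \mathbb{R}^{k}, \quad \mathcal{N} \text{ classifies every } x \in S \text{ within error } \varepsilon \;\Longrightarrow\; n_{1}(\mathcal{N}) = \Theta(k), $$

where $n_{1}(\mathcal{N})$ denotes the number of units in the first layer of $\mathcal{N}$, and the implication is asserted under the assumption that $\mathcal{N}$ solves every instance of the stated family of problems.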
