Abstract

Designing a feedforward neural network has always raised questions about the number of hidden layers and the number of neurons in each hidden layer. While there is no unique, universal answer, it is known that the particularities of the problem to be solved, especially in classification, offer some guidance on how to address these questions. One heuristic approach, when only a single hidden layer is involved, analyzes how the classes are separated from each other by a finite number of hyperplanes, thereby defining the size of the network's hidden layer. In this article, using concepts from computational geometry, an automated and time-efficient method is presented and discussed for estimating the number of neurons in the hidden layer by computing the number of hyperplanes separating the classes, based on convex hulls and approximations of alpha shapes. Examples of different situations that may arise and the results of applying the method are illustrated. The results show that the proposed method gives a very good estimate of the number of hidden neurons.
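The abstract does not spell out the algorithm, so the following is only a minimal, hypothetical sketch in Python of the underlying idea that each hidden neuron realizes one separating hyperplane. It checks, for two synthetic 2-D classes, whether their convex hulls (computed with scipy.spatial.ConvexHull) can be separated by a single hyperplane, using a linear-programming feasibility test rather than the alpha-shape approximation the paper describes; the function names and the data are assumptions for illustration only, not the authors' method.

```python
# Hypothetical 2-D sketch (not the authors' algorithm): if the convex hulls of two
# classes can be split by one hyperplane, a single hidden neuron would suffice.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.optimize import linprog


def hull_vertices(points):
    """Return the convex-hull vertices of a 2-D point cloud."""
    hull = ConvexHull(points)
    return points[hull.vertices]


def separable_by_one_hyperplane(class_a, class_b):
    """Check whether one hyperplane w.x + c separates the two classes.

    Feasibility of the linear program
        w.a + c >= 1   for every hull vertex a of class A
        w.b + c <= -1  for every hull vertex b of class B
    is equivalent to (strict) linear separability of the two point sets.
    """
    va = hull_vertices(class_a)
    vb = hull_vertices(class_b)
    # Variables x = (w1, w2, c); both constraint families rewritten as A_ub @ x <= -1.
    A_ub = np.vstack([np.hstack([-va, -np.ones((len(va), 1))]),   # -(w.a + c) <= -1
                      np.hstack([vb, np.ones((len(vb), 1))])])    #   w.b + c  <= -1
    b_ub = -np.ones(len(va) + len(vb))
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3, method="highs")
    return res.success


rng = np.random.default_rng(0)
class_a = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(50, 2))
class_b = rng.normal(loc=(3.0, 3.0), scale=0.5, size=(50, 2))

if separable_by_one_hyperplane(class_a, class_b):
    print("Disjoint hulls: one hyperplane, hence one hidden neuron, suffices.")
else:
    print("Hulls overlap: more hyperplanes, hence more hidden neurons, are needed.")
```

In non-separable cases the paper's method, as summarized above, would count how many such hyperplanes are required to keep the classes apart and size the hidden layer accordingly.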
