Abstract

In the domain of artificial neural networks, it is important to know their representation, classification, and generalization capabilities. There is also a need for time- and resource-efficient training algorithms. Here, a new zero-error training method is derived for digital computers and single-hidden-layer networks. The method also establishes the least upper bound on the number of hidden neurons: if there are N input vectors expressed as rational numbers, a network with N − 1 neurons in the hidden layer and M neurons at the output represents a bounded function F: R^D → R^M for all input vectors. Such a network has massively shared weights calculated by 1 + M regular systems of linear equations. Compared to similar approaches, the new method attains a theoretical least upper bound, is fast and robust, is adapted to floating-point data, and uses few free parameters, as documented by theoretical analyses and comparative tests. In theory, the method provides a new constructive proof of the least upper bound on the number of hidden neurons, extends the classes of supported activation functions, and relaxes the conditions on mapping functions. In practice, it is a non-iterative zero-error training algorithm that requires only a minimum number of neurons and layers.
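The idea of non-iterative training by solving linear systems can be illustrated with a short sketch. The Python example below is an assumption-laden illustration only: it uses random sigmoid hidden features plus an output bias and solves one least-squares system for the output weights, which interpolates N training points exactly with N − 1 hidden neurons when the resulting system is regular. It shows the general principle of zero-error, linear-system-based fitting, not the paper's exact shared-weight construction.

```python
import numpy as np

def train_single_hidden_layer(X, Y, n_hidden, seed=0):
    """Non-iterative fit of a single-hidden-layer network mapping X -> Y.

    X: (N, D) inputs, Y: (N, M) targets. Hidden weights are fixed at random;
    the output weights (plus an output bias) come from a single least-squares
    linear system, one column per output neuron.
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Illustrative hidden layer: random affine map followed by a sigmoid.
    W_h = rng.standard_normal((D, n_hidden))
    b_h = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_h + b_h)))            # hidden outputs, (N, n_hidden)
    H_aug = np.hstack([H, np.ones((N, 1))])               # append output-bias column
    # With n_hidden + 1 >= N and H_aug of full row rank, this solve reproduces
    # all N training targets exactly (up to floating-point error).
    W_out, *_ = np.linalg.lstsq(H_aug, Y, rcond=None)
    return W_h, b_h, W_out

def predict(params, X):
    W_h, b_h, W_out = params
    H = 1.0 / (1.0 + np.exp(-(X @ W_h + b_h)))
    H_aug = np.hstack([H, np.ones((X.shape[0], 1))])
    return H_aug @ W_out

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.standard_normal((8, 3))                       # N = 8 inputs in R^3
    Y = rng.standard_normal((8, 2))                       # M = 2 outputs
    params = train_single_hidden_layer(X, Y, n_hidden=7)  # N - 1 hidden neurons
    residual = np.max(np.abs(predict(params, X) - Y))
    print(f"max training residual: {residual:.2e}")       # near zero on the training set
```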
