Among binary unit-based constructive algorithms, Sequential Learning is particularly interesting for several reasons, the most significant being its ability to handle real-valued inputs without preprocessing. However, because of its construction process, the classical algorithms derived from the Perceptron cannot be used to train each unit of the hidden layer. The recently proposed BCP Max appears to be the first efficient heuristic algorithm to perform the specific neuron training required by Sequential Learning. The principles of BCP Max, however, can easily be extended to the classical Perceptron derivatives commonly used in constructive algorithms. In this paper, we show how to extend the Thermal Perceptron, the Pocket algorithm, the Ratchet and the simple Perceptron to train a neuron within Sequential Learning. Finally, all solutions are compared.