Abstract

This letter identifies original independent works in the domain of randomization-based feedforward neural networks. In the most common approach, only the output layer weights require training, while the hidden layer weights and biases are randomly assigned and kept fixed. The output layer weights are obtained using either iterative techniques or non-iterative closed-form solutions. The first such work (abbreviated as RWNN) was published in 1992 by Schmidt et al. for a single-hidden-layer neural network with sigmoidal activation. In 1994, a closed-form solution was offered for random vector functional link (RVFL) neural networks, which include direct links from the input to the output. For radial basis function neural networks, randomized selection of the basis functions' centers was used in 1988. Several works were published thereafter that employed similar techniques under different names while failing to cite the original or relevant sources. In this letter, we attempt to identify and trace the origins of such randomization-based feedforward neural networks and to give credit to the original works where due, in the hope that future research publications in this field will provide a fair literature review and appropriate experimental comparisons. We also briefly review the limited performance comparisons in the literature, two recently proposed new names, and randomization-based multi-layer or deep neural networks, and we outline promising future directions.
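To make the common recipe concrete, the following is a minimal sketch in the RVFL style described above: randomly assigned, fixed hidden weights and biases, sigmoidal activations, direct input-to-output links, and a non-iterative closed-form (regularized least-squares) solution for the output weights. All function names, the uniform initialization range, and the ridge parameter are illustrative choices made here, not details from the original 1992/1994 works.

```python
# Illustrative sketch of a randomization-based feedforward network
# (RVFL style). Only the output weights `beta` are trained; the hidden
# weights W and biases b are random and never updated.
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, n_hidden=100, lam=1e-3):
    """Fit output weights in closed form; hidden layer stays random and fixed."""
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # random hidden weights (range is an assumption)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # sigmoidal hidden activations
    D = np.hstack([H, X])                                    # direct links: raw inputs appended to hidden outputs
    # Non-iterative closed-form solution: ridge-regularized least squares
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.hstack([H, X]) @ beta

# Toy usage: regression on synthetic data
X = rng.normal(size=(200, 5))
Y = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = rvfl_fit(X, Y)
print(rvfl_predict(X, W, b, beta).shape)  # (200, 1)
```

Dropping the direct-link columns (the `X` block in `D`) reduces this sketch to the single-hidden-layer RWNN setting, where only the random hidden features feed the trained output layer.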
