Abstract

Stochastic Configuration Networks (SCNs) are an incremental variant of randomly weighted neural networks whose key feature is the supervisory constraint imposed when adding hidden-layer nodes. Recent studies reveal two weaknesses of SCNs: redundancy among the added hidden-layer nodes, and a lack of interpretability of the designed constraints. To overcome these weaknesses, this paper proposes a new model, named Evolving SCN, which improves the interpretability of the constraint design from an evolutionary viewpoint via a sampling mechanism, and which promotes model compactness by optimizing the random weights within the space of constraint parameters. Surprisingly, although an evolutionary method is used in our model, both effectiveness and efficiency improve significantly, in terms of running time and prediction accuracy, compared with existing versions of SCNs. This work makes a first attempt to enhance the interpretability of incremental node addition and simultaneously reduce hidden-node redundancy in SCNs, offering new insights into model compactness from the perspective of Occam's razor and, further, into the nature of incremental learning.
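
The abstract refers to the supervisory constraint that SCNs impose when adding hidden nodes. As a rough illustration only (not the authors' Evolving SCN), the sketch below shows the standard SCN node-addition loop in the style of Wang and Li (2017): random candidate nodes are accepted only if they satisfy an inequality constraint on the residual, and output weights are then refit globally by least squares. The constraint here is a simplified form that omits the tolerance sequence μ_L used in the original formulation, and the names `build_scn`, `r`, `lam`, and the sigmoid activation are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def build_scn(X, T, max_nodes=25, n_candidates=100, r=0.99, lam=1.0, tol=1e-2, seed=0):
    """Grow an SCN-style network by adding one hidden node at a time.

    A candidate node (w, b) with activations h is kept only if it satisfies
    the (simplified) supervisory inequality for every output q:
        xi_q = (e_q . h)^2 / (h . h) - (1 - r) * (e_q . e_q) >= 0,
    where e is the current residual. Output weights are refit globally by
    least squares after each addition (SC-III-style update).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))          # hidden-layer output matrix
    e = T.copy()                  # residual starts at the targets
    for _ in range(max_nodes):
        best, best_score = None, -np.inf
        for _ in range(n_candidates):
            # Sample a random candidate node from the uniform range [-lam, lam].
            w = rng.uniform(-lam, lam, size=d)
            b = rng.uniform(-lam, lam)
            h = sigmoid(X @ w + b)
            xi = (e.T @ h) ** 2 / (h @ h) - (1.0 - r) * np.sum(e ** 2, axis=0)
            # Keep the feasible candidate with the largest constraint margin.
            if np.all(xi >= 0) and xi.sum() > best_score:
                best, best_score = (w, b, h), xi.sum()
        if best is None:          # no candidate met the constraint
            break
        H = np.column_stack([H, best[2]])
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # global least squares
        e = T - H @ beta
        if np.linalg.norm(e) < tol:
            break
    return H, e

# Tiny usage example on synthetic 1-D regression data.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * X)
H, e = build_scn(X, T)
print(f"{H.shape[1]} nodes added, residual norm {np.linalg.norm(e):.4f}")
```

In this reading, the redundancy criticized in the abstract arises because any candidate passing the inequality is eligible, while the Evolving SCN instead optimizes the candidates within the constraint-parameter space rather than sampling them purely at random.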
