Abstract
Ensemble learning of K nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. This paper shows that the ensemble generalization error can be calculated by using two order parameters: the similarity between the teacher and a student, and the similarity among students. The differential equations that describe the dynamical behaviors of these parameters are derived analytically in the cases of Hebbian, perceptron, and AdaTron learning. These three rules show different characteristics in their affinity for ensemble learning, that is, in maintaining the variety among students. Results show that AdaTron learning is superior to the other two rules.
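For intuition only, the following is a minimal numerical sketch of the setting the abstract describes: K sign-output students learn a teacher's rule online under the standard Hebbian, perceptron, and AdaTron updates, and empirical analogues of the two order parameters (teacher-student similarity R and student-student similarity q) are reported alongside the ensemble's generalization error. All concrete choices here (Gaussian inputs, majority-vote combination, the values of N, K, ETA, and STEPS) are illustrative assumptions, not the paper's analytical treatment, which derives these quantities in the thermodynamic limit.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 500, 3            # input dimension, number of students (assumed values)
ETA = 1.0                # learning rate (assumed; the paper treats it analytically)
STEPS = 20 * N           # number of online examples, O(N) as usual in online learning

teacher = rng.standard_normal(N)
students = rng.standard_normal((K, N))   # independent initializations give variety

def update(J, x, y, rule):
    """One online update of a single student J on example (x, y = teacher's sign)."""
    u = (J @ x) / np.sqrt(N)                         # student's local field
    if rule == "hebbian":                            # always follow the teacher's label
        return J + ETA * y * x / np.sqrt(N)
    if rule == "perceptron":                         # correct only on mistakes
        return J + (np.sign(u) != y) * ETA * y * x / np.sqrt(N)
    if rule == "adatron":                            # mistake-driven, scaled by |u|
        return J - (np.sign(u) != y) * ETA * u * x / np.sqrt(N)
    raise ValueError(rule)

def order_params(students):
    """Empirical analogues of the abstract's two order parameters:
    R = teacher-student similarity, q = mean similarity among students."""
    Jn = students / np.linalg.norm(students, axis=1, keepdims=True)
    Bn = teacher / np.linalg.norm(teacher)
    R = float(np.mean(Jn @ Bn))
    overlaps = Jn @ Jn.T
    q = float((overlaps.sum() - K) / (K * (K - 1)))  # off-diagonal mean
    return R, q

def gen_error(students, n_test=4000):
    """Ensemble generalization error under majority vote of the K sign outputs."""
    X = rng.standard_normal((n_test, N))
    y = np.sign(X @ teacher)
    ensemble = np.sign(np.sign(X @ students.T).sum(axis=1))
    return float(np.mean(ensemble != y))

for rule in ("hebbian", "perceptron", "adatron"):
    J = students.copy()
    for _ in range(STEPS):
        x = rng.standard_normal(N)
        y = np.sign(teacher @ x)
        for k in range(K):
            J[k] = update(J[k], x, y, rule)
    R, q = order_params(J)
    print(f"{rule:10s}  R={R:.3f}  q={q:.3f}  ensemble error={gen_error(J):.3f}")
```

A smaller q indicates greater variety among the students, which is what lets the majority vote outperform any single student; comparing the printed q values across the three rules gives a rough empirical feel for the affinity differences the abstract asserts.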