Abstract

This paper proposes to implement negative correlation learning (NCL) by optimizing its two learning functions on two separate, non-overlapping subsets of the training data. Because the two subsets can be generated randomly for each individual neural network (NN), they differ for every pair of individuals in a neural network ensemble (NNE). When the two learning functions of NCL are optimized separately, each individual NN avoids conflicts in learning because it always has a unique learning direction on any given data sample. Each individual NN is therefore clearly aware of its own learning direction on every training sample, and such self-awareness is essential for creating a set of cooperative NNs in an NNE. Experimental results show that individual NNs trained by NCL with this separate learning remain diverse and perform stably even over longer training runs.
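The separate-learning scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the paper's implementation: it assumes small one-hidden-layer regression networks, a squared-error term, and the common NCL penalty p_i = -lambda * (F_i - F_ens)^2; for each network, a random disjoint split assigns subset A to the error term and subset B to the correlation penalty. The toy data, network sizes, penalty strength lambda, and learning rate are all assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) with noise (illustrative stand-in).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

M = 5       # ensemble size (assumed)
lam = 0.5   # NCL penalty strength lambda (assumed)
lr = 0.05   # learning rate (assumed)
H = 10      # hidden units per individual network (assumed)

def init_net():
    # One-hidden-layer tanh network, a minimal stand-in for an individual NN.
    return {"W1": rng.normal(0, 0.5, (1, H)), "b1": np.zeros(H),
            "w2": rng.normal(0, 0.5, H), "b2": 0.0}

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return h @ net["w2"] + net["b2"], h

def step(net, X, delta, h):
    # One gradient-descent step given the output-side gradient delta = dE/dF_i.
    n = len(delta)
    net["w2"] -= lr * h.T @ delta / n
    net["b2"] -= lr * delta.mean()
    dpre = np.outer(delta, net["w2"]) * (1 - h ** 2)
    net["W1"] -= lr * X.T @ dpre / n
    net["b1"] -= lr * dpre.mean(axis=0)

nets = [init_net() for _ in range(M)]

# For each network, split the training set into two disjoint random subsets:
# A drives the empirical-error term, B drives the NCL correlation penalty,
# so every sample gives each network exactly one learning direction.
splits = []
for _ in range(M):
    idx = rng.permutation(len(X))
    splits.append((idx[:100], idx[100:]))

for epoch in range(500):
    F_ens = np.mean([forward(n, X)[0] for n in nets], axis=0)
    for i, net in enumerate(nets):
        A, B = splits[i]
        # Subset A: minimise the squared error (1/2)(F_i - y)^2.
        FA, hA = forward(net, X[A])
        step(net, X[A], FA - y[A], hA)
        # Subset B: minimise the penalty -lambda*(F_i - F_ens)^2,
        # whose gradient w.r.t. F_i is -lambda*(F_i - F_ens).
        FB, hB = forward(net, X[B])
        step(net, X[B], -lam * (FB - F_ens[B]), hB)

F_final = np.mean([forward(n, X)[0] for n in nets], axis=0)
mse = np.mean((F_final - y) ** 2)
print(f"ensemble MSE: {mse:.4f}")
```

Because the two subsets are disjoint, each network receives exactly one gradient signal per training sample, which is the conflict-free, "self-aware" learning direction the abstract describes; in standard NCL both gradient terms would instead be summed on the same samples.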
