Abstract

It is known that the performance of a neural network ensemble is better than the average performance of the individual neural networks in the ensemble. The individual networks in an ensemble should not only be different from each other but also cooperative. While many methods are available for training a set of neural networks to be different, few of them address cooperation among the trained individual networks. In this paper, different learning targets are set for the individual neural networks when learning the same data point in negative correlation learning. In the original negative correlation learning, the same error function was applied to all the individual networks in an ensemble, which can drive the individual networks to become either too similar or too different if learning runs for long. With different learning targets, negative correlation learning is able to maintain stable performance even when learning runs for long. This stability is analyzed in terms of both performance and cooperation in this paper.
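As background for the original negative correlation learning referred to above, the following is a minimal sketch of its standard update rule, assuming the commonly used simplified penalty p_i = -(F_i - F̄)² and the approximate per-network gradient (F_i - y) - λ(F_i - F̄); the toy data, linear members, and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of original negative correlation learning (NCL): every ensemble
# member is trained with the SAME error function, consisting of its own
# squared error plus a penalty that decorrelates it from the ensemble mean.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

M = 4      # ensemble size (illustrative)
lam = 0.5  # negative-correlation strength lambda (illustrative)
lr = 0.1   # learning rate

# Each member is a simple linear model F_i(x) = w_i * x + b_i.
W = rng.normal(size=M)
B = rng.normal(size=M)

for epoch in range(500):
    F = np.outer(X[:, 0], W) + B          # (n, M): member outputs
    Fbar = F.mean(axis=1, keepdims=True)  # ensemble output
    # Per-member error signal: own error minus the NCL penalty gradient.
    err = (F - y[:, None]) - lam * (F - Fbar)
    W -= lr * (err * X).mean(axis=0)
    B -= lr * err.mean(axis=0)

# Evaluate the ensemble (mean of members) on the training data.
F = np.outer(X[:, 0], W) + B
mse = np.mean((F.mean(axis=1) - y) ** 2)
```

Note that the penalty term pushes each member away from the ensemble mean (encouraging diversity) while the first term pulls it toward the target; the paper's concern is that balancing these two forces with a single shared error function becomes unstable under long training.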
