Abstract

Neural networks have been shown capable of learning arbitrary input-output mappings. However, like most machine learning algorithms, neural networks are adversely affected by sparse training sets, particularly with respect to generalization performance. Several approaches have been suggested for improving generalization when only sparse training data are available, including adding noise to the training data or to the weight updates. One method, due to Karystinos and Pados, first clusters the training data and then generates new training examples from a probability density function estimated from the clusters. This paper investigates that method further, focusing on its sensitivity to the clustering procedure: the number of clusters used, the clustering method (K-means) itself, and the use of minimum differential entropy as an indicator of a good cluster choice.
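The following is a minimal sketch of the cluster-then-resample idea the abstract describes: cluster the sparse training set with K-means, estimate a simple density from the clusters, and draw synthetic examples from it. The per-cluster Gaussian density, the function name expand_training_set, and its parameters are illustrative assumptions, not Karystinos and Pados's exact estimator.

```python
import numpy as np
from sklearn.cluster import KMeans

def expand_training_set(X, n_clusters=3, n_new=100, seed=0):
    """Generate n_new synthetic samples from a density estimated
    over K-means clusters of X (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)

    # Mixture weights proportional to cluster sizes.
    counts = np.bincount(km.labels_, minlength=n_clusters)
    weights = counts / counts.sum()

    X_new = []
    for k in rng.choice(n_clusters, size=n_new, p=weights):
        members = X[km.labels_ == k]
        mean = members.mean(axis=0)
        # Assumed density: one Gaussian per cluster; a small ridge
        # keeps tiny clusters' covariances sampleable.
        cov = np.cov(members, rowvar=False, ddof=0) + 1e-6 * np.eye(X.shape[1])
        X_new.append(rng.multivariate_normal(mean, cov))
    return np.vstack(X_new)
```

In this framing, the paper's sensitivity questions correspond to varying n_clusters, swapping out KMeans, and comparing candidate clusterings by the differential entropy of the estimated density.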
