Abstract

This paper empirically analyses the impact of changes to the set of training examples on the neural network error surface. Quantitative characteristics of the error surface related to properties such as ruggedness, modality, and structure are measured and visualized as the set of training examples changes. Both random subsampling and active learning from a fixed dataset are examined, producing ten different training scenarios. For each scenario, eleven error surface characteristics are calculated on five common benchmark problems. The results demonstrate that error surface characteristics calculated using only a subsample of the available data often do not generalize to those of the full dataset: the observed characteristics depend strongly on the particular set of examples used to calculate the error, and some are significantly altered by even small changes to that set. The main finding of this study is that, when the set of training examples may change during training, neural network training is in essence a dynamic optimization problem, suggesting that optimization algorithms developed specifically for dynamic optimization problems may be more efficient at training neural networks under such conditions.
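To make the methodology concrete, the sketch below illustrates how one error surface characteristic can be estimated on a data subsample versus the full dataset. It is a minimal illustration, not the paper's method: the synthetic dataset, the small MLP, the random-walk step size, and the sign-change ruggedness proxy (a crude stand-in for the entropic ruggedness measures used in fitness landscape analysis) are all assumptions, and none of the paper's eleven characteristics or five benchmarks are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data standing in for a benchmark problem.
X = rng.normal(size=(500, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

def mse_loss(w, X, y, hidden=8):
    """MSE of a 4-8-1 MLP whose weights are packed into the flat vector w."""
    n_in = X.shape[1]
    W1 = w[: n_in * hidden].reshape(n_in, hidden)
    b1 = w[n_in * hidden : n_in * hidden + hidden]
    W2 = w[n_in * hidden + hidden : n_in * hidden + 2 * hidden].reshape(hidden, 1)
    b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

def ruggedness(losses, eps=1e-4):
    """Fraction of sign changes in successive loss differences along a walk
    (an assumed proxy, not one of the paper's eleven characteristics)."""
    diffs = np.diff(losses)
    signs = np.where(diffs > eps, 1, np.where(diffs < -eps, -1, 0))
    changes = np.sum(signs[1:] * signs[:-1] < 0)
    return changes / max(len(signs) - 1, 1)

# Random walk through weight space, evaluating error along the way.
dim = 4 * 8 + 8 + 8 + 1
walk = [rng.normal(size=dim)]
for _ in range(999):
    walk.append(walk[-1] + 0.05 * rng.uniform(-1, 1, size=dim))

# Same walk, two error functions: full dataset vs. a 10% random subsample.
sub = rng.choice(len(X), size=50, replace=False)
full_losses = [mse_loss(w, X, y) for w in walk]
sub_losses = [mse_loss(w, X[sub], y[sub]) for w in walk]

print("ruggedness on full dataset :", ruggedness(full_losses))
print("ruggedness on 10% subsample:", ruggedness(sub_losses))
```

Comparing the two printed values shows how the same walk through weight space can yield different characteristic estimates depending on which examples define the error, which is the kind of discrepancy the study quantifies.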
