Abstract
The problem of the pattern selection strategy for neural network training has not yet received much attention. In back-propagation training, all patterns are usually presented equally often, in random order. This paper presents and compares several alternative pattern selection strategies that adapt to the training process. They favor the selection of patterns producing high error values at the expense of patterns the network has already mastered. The strategies presented are of two types, random and deterministic. In a random strategy, a pattern is selected randomly with a probability that varies with the state of the training process. In contrast, the deterministic strategies follow predefined, global schemes that increase the presentation frequency of certain patterns by forced repetition. Simulation results for two test problems show that convergence time and learning accuracy can be improved, but only by strategies of the deterministic type.
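To make the two strategy families concrete, the following is a minimal sketch, not the paper's exact procedure: an error-weighted random selection rule and a simple deterministic repetition schedule. All names and parameters (select_pattern, repetition_schedule, eps, threshold, extra) are illustrative assumptions introduced here.

```python
import numpy as np

def select_pattern(errors, rng, eps=1e-8):
    """Random strategy (sketch): pick one pattern index with probability
    proportional to its most recently observed error."""
    weights = errors + eps              # keep mastered patterns selectable with small probability
    probs = weights / weights.sum()
    return rng.choice(len(errors), p=probs)

def repetition_schedule(errors, base=1, extra=2, threshold=0.1):
    """Deterministic strategy (sketch): every pattern is presented `base` times
    per epoch, and patterns whose error exceeds `threshold` are repeated
    `extra` additional times."""
    return base + extra * (errors > threshold).astype(int)

# Toy usage: four patterns, two nearly mastered (low error), two still hard.
rng = np.random.default_rng(0)
errors = np.array([0.01, 0.02, 0.40, 0.55])

counts = np.zeros(4, dtype=int)
for _ in range(1000):
    counts[select_pattern(errors, rng)] += 1
print(counts)                           # high-error patterns 2 and 3 dominate the draws

print(repetition_schedule(errors))      # e.g. [1 1 3 3]: hard patterns repeated per epoch
```

In this sketch the per-pattern errors would be refreshed after each training pass, so the selection pressure shifts away from patterns as the network masters them.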