Abstract

Adaptation of parameters during neural network training according to the actual shape of the error surface is expected to be a powerful instrument for enforcing convergence and reducing the time consumed by neural network training. This paper presents an analysis of how fuzzy adaptation of the training parameters (learning rate and momentum) can accelerate backpropagation learning in feedforward networks. It summarizes first experiences with this approach when applied to a neural network simulator for pattern recognition, in particular how to set up an appropriate fuzzy rulebase and how to choose efficient fuzzy sets.

Keywords: Fuzzy Control, Decision Table, Neural Network Training, Parameter Adaption, Training Parameter
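For illustration only, the sketch below shows one way a fuzzy rulebase could adapt the learning rate from the observed change of the training error. The membership functions, rule table, and adjustment factors are assumptions made for this example and are not the rulebase described in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def fuzzy_lr_update(lr, prev_error, curr_error, lr_min=1e-5, lr_max=1.0):
    """Adjust the learning rate from the relative change of the training error.

    Illustrative rulebase (a decision table with three rules):
      error decreasing       -> increase lr
      error roughly constant -> keep lr
      error increasing       -> decrease lr
    """
    # Relative error change, clipped to [-1, 1].
    delta = (curr_error - prev_error) / max(abs(prev_error), 1e-12)
    delta = max(-1.0, min(1.0, delta))

    # Fuzzify: membership in "decreasing", "steady", "increasing".
    mu_dec = tri(delta, -1.5, -1.0, 0.0)
    mu_std = tri(delta, -0.5, 0.0, 0.5)
    mu_inc = tri(delta, 0.0, 1.0, 1.5)

    # Each rule proposes a multiplicative factor for the learning rate
    # (values chosen arbitrarily for the sketch).
    factors = (1.1, 1.0, 0.7)
    weights = (mu_dec, mu_std, mu_inc)

    # Defuzzify by the weighted average of the rule outputs.
    total = sum(weights) or 1.0
    factor = sum(w * f for w, f in zip(weights, factors)) / total

    return min(lr_max, max(lr_min, lr * factor))


# Example: the error rose slightly, so the controller lowers the step size.
print(fuzzy_lr_update(0.1, prev_error=0.50, curr_error=0.55))
```

The same scheme extends to the momentum term by adding a second output with its own rule consequents; the paper's actual design of rulebase and fuzzy sets is the subject of the analysis it reports.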
