Abstract

In this work, the classification efficiency of a feed-forward neural network architecture is analyzed by using different activation functions for the neurons of the hidden and output layers and by varying the number of neurons in the hidden layer. 250 numerals were gathered from 35 people to create the samples. After binarization, these numerals were combined to form training patterns for the neural network. The network was trained by adjusting its connection strengths at every iteration. Experiments were performed with all combinations of the two activation functions logsig and tansig for the neurons of the hidden and output layers. The results revealed that as the number of neurons in the hidden layer is increased, the network is trained in fewer epochs, and the percentage recognition accuracy of the neural network increases up to a certain level and then starts decreasing once the number of hidden neurons exceeds that level, due to overfitting.

Keywords: Numeral Recognition, MLP, Hidden Layers, Backpropagation, Activation Functions
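To make the experimental setup concrete, the following is a minimal sketch (not the authors' code) of a one-hidden-layer feed-forward network trained by backpropagation, where the hidden- and output-layer activations can be set to logsig or tansig as in the paper. The function names follow MATLAB conventions; the learning rate, epoch count, and weight-initialization scale are illustrative assumptions.

```python
import numpy as np

def logsig(x):
    # Logistic sigmoid activation.
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    # Hyperbolic tangent activation.
    return np.tanh(x)

# Derivatives expressed in terms of the activation output a = f(net).
def d_logsig(a):
    return a * (1.0 - a)

def d_tansig(a):
    return 1.0 - a ** 2

def train_mlp(X, Y, n_hidden, hid_act, out_act, lr=0.1, epochs=1000, seed=0):
    """X: (n_samples, n_inputs) binarized patterns; Y: (n_samples, n_outputs) one-hot targets.
    hid_act / out_act: "logsig" or "tansig". Hyperparameters are assumptions."""
    act = {"logsig": (logsig, d_logsig), "tansig": (tansig, d_tansig)}
    f_h, df_h = act[hid_act]
    f_o, df_o = act[out_act]
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, Y.shape[1]))
    for _ in range(epochs):
        # Forward pass through hidden and output layers.
        H = f_h(X @ W1)
        O = f_o(H @ W2)
        # Backward pass: squared-error gradient propagated through both layers.
        dO = (O - Y) * df_o(O)
        dH = (dO @ W2.T) * df_h(H)
        # Adjust connection strengths at every iteration.
        W2 -= lr * H.T @ dO
        W1 -= lr * X.T @ dH
    return W1, W2, f_h, f_o

def accuracy(X, Y, W1, W2, f_h, f_o):
    # Fraction of patterns whose highest-scoring output matches the target class.
    pred = np.argmax(f_o(f_h(X @ W1) @ W2), axis=1)
    return np.mean(pred == np.argmax(Y, axis=1))
```

Sweeping n_hidden over a range for each of the four (hidden, output) activation pairings mirrors the paper's experiment: the epochs needed to train and the recognition accuracy can then be recorded for each configuration.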
