Abstract
In this work, the classification efficiency of a feed-forward neural network architecture is analyzed by using different activation functions for the neurons of the hidden and output layers and by varying the number of neurons in the hidden layer. 250 numerals were gathered from 35 people to create the samples. After binarization, these numerals were combined to form training patterns for the neural network. The network was trained by adjusting the connection strengths at every iteration. Experiments were performed with all combinations of two activation functions, logsig and tansig, for the neurons of the hidden and output layers. The results revealed that as the number of neurons in the hidden layer is increased, the network trains in fewer epochs, and the percentage recognition accuracy increases up to a certain level; beyond that level, accuracy starts decreasing due to overfitting.

Keywords: Numeral Recognition, MLP, Hidden Layers, Backpropagation, Activation Functions
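The paper itself provides no code; the following is a minimal sketch, in Python rather than the MATLAB toolbox the activation names logsig and tansig come from, of the kind of experiment the abstract describes: a one-hidden-layer feed-forward network trained by backpropagation, with selectable hidden and output activations and a configurable number of hidden neurons. All function names, sizes, and hyperparameters here are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Activation functions under the paper's MATLAB-style names:
# logsig is the logistic sigmoid, tansig the hyperbolic tangent.
def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def logsig_deriv(y):          # derivative expressed via the output y
    return y * (1.0 - y)

def tansig(x):
    return np.tanh(x)

def tansig_deriv(y):
    return 1.0 - y ** 2

ACTS = {"logsig": (logsig, logsig_deriv), "tansig": (tansig, tansig_deriv)}

def train_mlp(X, T, hidden_act="tansig", output_act="logsig",
              n_hidden=20, lr=0.1, epochs=1000, seed=0):
    """One-hidden-layer MLP trained with plain gradient-descent backprop."""
    f_h, df_h = ACTS[hidden_act]
    f_o, df_o = ACTS[output_act]
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (X.shape[1], n_hidden))  # input -> hidden
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, T.shape[1]))  # hidden -> output
    b2 = np.zeros(T.shape[1])
    for _ in range(epochs):
        # Forward pass
        H = f_h(X @ W1 + b1)
        Y = f_o(H @ W2 + b2)
        # Backward pass (squared-error loss); connection strengths are
        # adjusted at every iteration, as the abstract describes
        d_out = (Y - T) * df_o(Y)
        d_hid = (d_out @ W2.T) * df_h(H)
        W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)
    return W1, b1, W2, b2

# Toy usage: four binarized 6-pixel "patterns", two classes (one-hot targets).
X = np.array([[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1],
              [1, 1, 0, 0, 1, 1], [0, 0, 1, 1, 0, 0]], dtype=float)
T = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
params = train_mlp(X, T, hidden_act="tansig", output_act="logsig", n_hidden=8)
```

Sweeping `n_hidden` and the four (hidden, output) activation combinations over such a loop is one plausible way to reproduce the trade-off the abstract reports, where accuracy rises with hidden-layer size before overfitting sets in.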