Abstract

The Artificial Neural Network (ANN) is an established and powerful technique for problems such as pattern recognition and data analysis. Its data-driven, self-adaptive, and non-linear capabilities make it suitable for high-speed processing and for learning the solution to a problem from a set of examples. Neural network training remains an active area of research, with the Multi-Layer Perceptron (MLP) trained with Back Propagation (BP) receiving the most attention. In this study, a performance analysis of two BP training algorithms, gradient descent and gradient descent with momentum, each using the sigmoidal and hyperbolic tangent activation functions and coupled with pre-processing techniques, is carried out. The Min-Max, Z-Score, and Decimal Scaling pre-processing techniques are analyzed. Results from the simulations reveal that pre-processing the data greatly improves ANN convergence, with Z-Score producing the overall best performance on all datasets.
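For reference, the three pre-processing techniques named above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation; the function names and the [0, 1] Min-Max target range are assumptions.

```python
import numpy as np

def min_max(x):
    # Rescale each feature linearly to [0, 1] (assumed target range).
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def z_score(x):
    # Center each feature to zero mean and unit standard deviation.
    return (x - x.mean(axis=0)) / x.std(axis=0)

def decimal_scaling(x):
    # Divide each feature by 10^j, where j is the smallest integer
    # such that max(|x|) / 10^j < 1.
    j = np.ceil(np.log10(np.abs(x).max(axis=0)))
    return x / (10.0 ** j)
```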
