Abstract

The Artificial Neural Network (ANN) is a machine learning pattern recognition technique widely used to predict water resources from historical data. An ANN can produce accurate forecasts given an appropriate training algorithm and transfer function, together with suitable learning rate and momentum values. In this study, using the Neuroph Studio platform, six models combining different training algorithms, namely Backpropagation, Backpropagation with Momentum, and Resilient Propagation, with different transfer functions, namely Sigmoid and Gaussian, were compared. After determining the number of input, hidden, and output neurons in the ANN model's respective layers, this study compared data normalization techniques and showed that Min-Max normalization yielded better results in terms of Mean Square Error (MSE) than Max normalization. Of the six models tested, Model 1, which combined the Backpropagation training algorithm with the Sigmoid transfer function, yielded the lowest MSE. Moreover, a learning rate of 0.2 and a momentum of 0.9 resulted in very minimal error in terms of MSE. The results obtained in this research clearly suggest that the ANN can be a viable technique for medium-term water consumption forecasting.

Highlights

  • Artificial Neural Networks (ANN) are mathematical models inspired by how brain neurons learn and perform pattern recognition

  • The comparison was made by training the neural network, with 7 hidden neurons, using different combinations of transfer functions, namely Sigmoid and Gaussian, and training algorithms, namely Backpropagation, Backpropagation with Momentum, and Resilient Propagation

  • Two normalization techniques, namely Min-Max normalization and Max normalization, were compared in the water consumption data preparation phase, with Min-Max scaling yielding better results in terms of Mean Square Error (MSE) values
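The two normalization techniques named in the highlights can be sketched as follows. This is an illustrative sketch only: the consumption figures below are made up for the example, not taken from the study's dataset.

```python
import numpy as np

def min_max_normalize(x):
    # Min-Max normalization: rescale values into the full [0, 1] range.
    return (x - x.min()) / (x.max() - x.min())

def max_normalize(x):
    # Max normalization: divide every value by the maximum.
    return x / x.max()

# Hypothetical monthly water-consumption figures (illustrative only).
consumption = np.array([120.0, 135.0, 150.0, 142.0, 160.0, 155.0])
print(min_max_normalize(consumption))  # spans the full [0, 1] interval
print(max_normalize(consumption))      # spans only (min/max, 1]
```

Min-Max scaling spreads the data across the whole [0, 1] interval, which matches the output range of the sigmoid transfer function; Max scaling compresses the values toward the top of that interval when the minimum is far from zero, which may contribute to the higher MSE the study observed for it.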


Summary

Introduction

Artificial Neural Networks (ANN) are mathematical models inspired by how brain neurons learn and perform pattern recognition. The basic idea of an ANN is that the network learns the mapping from input data to the associated output data with the help of training algorithms and transfer functions [3]-[6]. The output activation values are compared with the target pattern, and an error signal is calculated from the difference between the target and the computed output. This error signal is propagated backwards to adjust the network weights so that the network generates the correct output for the presented input pattern. The training patterns are presented repeatedly until the error reaches an acceptable value or other convergence criteria are satisfied [5]. Because this technique performs its computation backwards, it is named backpropagation.
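The training cycle just described — forward pass, error signal, backward weight adjustment, repeat until convergence — can be sketched in plain NumPy. This is a minimal illustration under stated assumptions, not the study's Neuroph Studio implementation: the XOR toy data, weight initialization, and epoch count are hypothetical, while the 7 hidden neurons, sigmoid transfer function, learning rate of 0.2, and momentum of 0.9 follow the values reported above.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid transfer function, as used by the study's best model.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy network: 2 inputs, 7 hidden neurons (as in the study), 1 output.
W1, b1 = rng.normal(scale=0.5, size=(2, 7)), np.zeros(7)
W2, b2 = rng.normal(scale=0.5, size=(7, 1)), np.zeros(1)
lr, momentum = 0.2, 0.9  # values reported in the abstract
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)

# Hypothetical training patterns (XOR toy data, not the study's dataset).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(5000):
    # Forward pass: compute the network output for every input pattern.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Error signal: difference between target and calculated output.
    E = T - Y
    # Backward pass: propagate the error signal through the layers.
    dY = E * Y * (1 - Y)            # sigmoid derivative at the output
    dH = (dY @ W2.T) * H * (1 - H)  # error propagated to the hidden layer
    # Momentum update: blend the new step with the previous one.
    vW2 = momentum * vW2 + lr * (H.T @ dY)
    vb2 = momentum * vb2 + lr * dY.sum(axis=0)
    vW1 = momentum * vW1 + lr * (X.T @ dH)
    vb1 = momentum * vb1 + lr * dH.sum(axis=0)
    W2 += vW2; b2 += vb2
    W1 += vW1; b1 += vb1

mse = float(np.mean((T - Y) ** 2))  # convergence criterion: MSE small enough
```

The momentum term reuses a fraction (0.9) of the previous weight change, which smooths the gradient steps and speeds convergence relative to plain backpropagation with the same learning rate.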


