Abstract

In this work, the performance of wavelet neural network (WNN) and adaptive neuro-fuzzy inference system (ANFIS) models was compared on small data sets using several criteria: the second-order corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), root mean squared error (RMSE), mean absolute relative error (MARE), the coefficient of determination (R2), the external validation function Q2F3, and the concordance correlation coefficient (CCC). Over-fitting was assessed as an additional criterion. Ten data sets were selected from the literature, and each was divided into training, test, and validation sets. Network parameters were optimized for the WNN and ANFIS models, and the architecture with the lowest error was selected for each data set. A careful survey of the number of permitted adjustable parameters (NPAP) and the total number of adjustable parameters (TNAP) in the WNN and ANFIS models showed that 60% of the ANFIS models and 30% of the WNN models suffered from over-fitting. As a rule of thumb, to avoid over-fitting it is suggested that the ratio of the number of observations in the training set to the number of input neurons should be greater than 10 for WNN and 20 for ANFIS. The smaller ratio required for WNN indicates its greater flexibility compared with ANFIS, which relates to the differences in structure and connections between the two networks.
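The criteria named above, and the rule-of-thumb ratio, can be expressed compactly. The sketch below uses the standard textbook forms of these statistics (least-squares forms of AICc and BIC, Lin's CCC, and the Q2F3 definition that scales external residuals by the training-set variance); the paper's exact formulations may differ, and the function names and thresholds here are illustrative assumptions, not the authors' code.

```python
# Sketch of the comparison criteria, in standard textbook forms (assumed,
# not taken from the paper).
import numpy as np

def criteria(y_obs, y_pred, k):
    """y_obs, y_pred: 1-D arrays of observed/predicted values;
    k: number of adjustable model parameters."""
    y_obs = np.asarray(y_obs, float)
    y_pred = np.asarray(y_pred, float)
    n = y_obs.size
    resid = y_obs - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)

    rmse = np.sqrt(ss_res / n)
    mare = np.mean(np.abs(resid) / np.abs(y_obs))   # mean absolute relative error
    r2 = 1.0 - ss_res / ss_tot

    # Information criteria in least-squares form; AICc adds the
    # small-sample (second-order) correction term to AIC.
    aic = n * np.log(ss_res / n) + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    bic = n * np.log(ss_res / n) + k * np.log(n)

    # Lin's concordance correlation coefficient.
    ccc = (2 * np.cov(y_obs, y_pred, bias=True)[0, 1]
           / (y_obs.var() + y_pred.var() + (y_obs.mean() - y_pred.mean()) ** 2))

    return {"RMSE": rmse, "MARE": mare, "R2": r2,
            "AICc": aicc, "BIC": bic, "CCC": ccc}

def q2_f3(y_train, y_ext_obs, y_ext_pred):
    """External Q2F3: external (test) residuals scaled by training-set variance."""
    y_train = np.asarray(y_train, float)
    y_ext_obs = np.asarray(y_ext_obs, float)
    y_ext_pred = np.asarray(y_ext_pred, float)
    press = np.sum((y_ext_obs - y_ext_pred) ** 2) / y_ext_obs.size
    tss = np.sum((y_train - y_train.mean()) ** 2) / y_train.size
    return 1.0 - press / tss

def overfitting_risk(n_train, n_inputs, model):
    """Rule of thumb from the abstract: n_train / n_inputs should exceed
    10 for WNN and 20 for ANFIS; returns True when the ratio is too small."""
    threshold = {"WNN": 10, "ANFIS": 20}[model]
    return (n_train / n_inputs) <= threshold
```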
