Abstract

The training algorithm of a Wavelet Neural Network (WNN) is a bottleneck that limits the accuracy of the final WNN model. Several methods have been proposed for training WNNs; in our review of the literature, most of these algorithms are iterative and must adjust all of the network's parameters. This paper proposes a one-step learning method that updates only the weights between the hidden layer and the output layer, while the wavelet function parameters are randomly assigned and kept fixed during training. Beyond the simplicity and speed of the proposed one-step algorithm, the experimental results confirm its performance in terms of final model accuracy and computational time.
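
To make the described scheme concrete, the sketch below illustrates one plausible reading of such a one-step WNN: wavelet dilation and translation parameters are drawn at random and frozen, and only the hidden-to-output weights are obtained in a single least-squares solve. This is an illustrative sketch only, not the paper's implementation; the Morlet mother wavelet, the parameter ranges, and all function names (`train_wnn_one_step`, `predict_wnn`) are assumptions for demonstration.

```python
import numpy as np

def morlet(z):
    # Morlet mother wavelet, a common choice for WNN hidden units (assumed here)
    return np.cos(1.75 * z) * np.exp(-0.5 * z ** 2)

def train_wnn_one_step(X, y, n_hidden=20, seed=None):
    """One-step WNN training sketch: wavelet parameters are random and
    fixed; only the hidden-to-output weights are solved for."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Randomly assign input weights, translations (b) and dilations (a);
    # these stay fixed throughout training.
    W = rng.uniform(-1.0, 1.0, size=(n_hidden, n_features))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)   # translations
    a = rng.uniform(0.5, 2.0, size=n_hidden)    # dilations
    # Hidden-layer wavelet responses for all training samples
    H = morlet((X @ W.T - b) / a)
    # One-step learning: least-squares solve for output weights
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return (W, b, a, beta)

def predict_wnn(params, X):
    W, b, a, beta = params
    return morlet((X @ W.T - b) / a) @ beta

# Toy usage: approximate a 1-D sine function
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
params = train_wnn_one_step(X, y, n_hidden=30, seed=0)
print("train MSE:", np.mean((predict_wnn(params, X) - y) ** 2))
```

Because the hidden layer is fixed, training reduces to a single linear least-squares problem, which is what makes the method non-iterative and fast relative to schemes that tune every wavelet parameter.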
