Abstract

Wavelet transformation is a powerful signal-processing method that decomposes the studied signal over a special basis with unique properties, the most important of which are compactness and multiresolution: wavelet functions are produced from the mother wavelet by translation and dilation. Wavelet neural networks (WNN) are a family of approximation algorithms that use wavelet functions to decompose the approximated function. If only approximation is needed and no inverse transformation is required, the translation and dilation coefficients may be determined during network training, and the windows corresponding to different wavelet functions may overlap, making the whole system much more efficient. Here we present a new type of WNN, the Adaptive Window WNN (AWWNN), in which window positions and wavelet levels are determined by a special iterative procedure. Two modifications of AWWNN are tested against a linear model and a multi-layer perceptron on the Mackey-Glass benchmark prediction problem.
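To make the mechanism described above concrete, the following is a minimal sketch (not the authors' implementation) of a one-hidden-layer WNN forward pass. It assumes the Mexican-hat (Ricker) function as the mother wavelet; each hidden unit applies the mother wavelet translated by b and dilated by a, and in a WNN these parameters would be adjusted during training along with the output weights.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) mother wavelet, a common choice for WNNs
    (an assumption here; the paper does not fix the wavelet family)."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wavelon(x, translation, dilation):
    """One WNN hidden unit: psi((x - b) / a), the mother wavelet
    shifted by the translation b and stretched by the dilation a."""
    return mexican_hat((x - translation) / dilation)

def wnn_forward(x, weights, translations, dilations):
    """Network output: a weighted sum of wavelons. In training, the
    weights, translations, and dilations are all free parameters, so
    the wavelet 'windows' may overlap freely."""
    units = np.array([wavelon(x, b, a)
                      for b, a in zip(translations, dilations)])
    return float(weights @ units)

# Example: two overlapping wavelons evaluated at a single input point.
y = wnn_forward(0.5,
                weights=np.array([0.7, -0.3]),
                translations=[0.0, 1.0],
                dilations=[1.0, 2.0])
```

Because approximation accuracy, not invertibility, is the goal, nothing constrains the (b, a) pairs to a dyadic grid, which is the flexibility the abstract refers to.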
