Abstract

A multilayer perceptron (MLP) network architecture has been formulated in which two adaptive parameters, the scaling and translation of the postsynaptic function at each node, are adjusted iteratively by gradient descent. The algorithm has been employed to predict experimental cardiovascular time series, following systematic reconstruction of the strange attractor of the training signal. Comparison with a standard MLP employing identical numbers of nodes and weight learning rates demonstrates that the adaptive approach provides an efficient modification of the MLP that permits faster learning. Thus, for an equivalent number of training epochs there was improved accuracy and generalization for both one- and k-step-ahead prediction. The applicability of the methodology is demonstrated for a set of monotonic postsynaptic functions (sigmoidal, upper-bounded, and non-bounded). The approach is computationally inexpensive, as the increase in the parameter space of the network compared to a standard MLP is small.
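The core idea can be illustrated with a minimal sketch. This is my own illustration, not the paper's implementation: a one-hidden-layer MLP whose sigmoidal nodes carry two extra learnable parameters, a per-node slope `a` (scaling) and translation `c`, so the postsynaptic function becomes f(x) = 1/(1 + exp(-a(x - c))); all parameters are trained jointly by plain gradient descent on one-step-ahead prediction of a delay-embedded toy series (a sine wave standing in for the cardiovascular signal; the embedding dimension, learning rate, and network size are arbitrary choices for the sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series and delay embedding (stand-in for attractor reconstruction)
t = np.arange(400)
series = np.sin(0.1 * t)
d = 3  # embedding dimension (illustrative choice)
X = np.stack([series[i:i + d] for i in range(len(series) - d)])
y = series[d:]  # one-step-ahead targets

H = 8  # hidden nodes
W1 = rng.normal(scale=0.5, size=(d, H))
b1 = np.zeros(H)
a = np.ones(H)    # adaptive slope (scaling) per node
c = np.zeros(H)   # adaptive translation per node
W2 = rng.normal(scale=0.5, size=H)
b2 = 0.0
lr = 0.05

def forward(X):
    net = X @ W1 + b1
    h = 1.0 / (1.0 + np.exp(-a * (net - c)))  # adaptive sigmoid
    return net, h, h @ W2 + b2

for epoch in range(2000):
    net, h, pred = forward(X)
    err = pred - y                         # dE/dpred for 0.5*MSE
    dh = np.outer(err, W2) * h * (1 - h)   # back through the sigmoid
    ga = (dh * (net - c)).mean(axis=0)     # gradient w.r.t. slope a
    gc = (dh * (-a)).mean(axis=0)          # gradient w.r.t. translation c
    dnet = dh * a
    # Joint gradient-descent update of weights and adaptive parameters
    W2 -= lr * (h.T @ err / len(y)); b2 -= lr * err.mean()
    a -= lr * ga; c -= lr * gc
    W1 -= lr * (X.T @ dnet / len(y)); b1 -= lr * dnet.mean(axis=0)

mse = np.mean((forward(X)[2] - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

The only change relative to a standard MLP is the pair of extra gradients `ga` and `gc` per hidden node, which is why the growth in parameter count (2 parameters per node) is small compared to the weight matrices.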
