Abstract

A recurrent Sigma-Pi-linked back-propagation neural network is presented. The input information is enriched by introducing "higher-order" terms generated through functional-link input nodes. Based on the Sigma-Pi-linked model, this network can approximate more complex functions at a much faster convergence rate. The recurrent network is tested intensively by applying it to different types of linear and nonlinear time series. Compared with the conventional feedforward BP network, its training converges substantially faster. The results indicate that the functional-approximation capability of this recurrent network makes it well suited to time-series applications.
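The "higher-order" terms mentioned above can be illustrated by a functional-link input expansion, where the raw input vector is augmented with product (Sigma-Pi style) terms before being fed to the network. The following is a minimal sketch of this idea under assumed second-order products; the function name and the choice of expansion are illustrative, not taken from the paper:

```python
import numpy as np

def functional_link_expand(x):
    """Augment the input vector with second-order product terms,
    in the spirit of a functional-link / Sigma-Pi input layer.
    (Illustrative sketch; the paper's exact expansion may differ.)"""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Higher-order terms: all products x_i * x_j for i <= j.
    pairs = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, np.array(pairs)])

# A 2-dimensional input grows to 2 + 3 = 5 features.
z = functional_link_expand([0.5, -1.0])
print(z)  # [ 0.5  -1.    0.25 -0.5   1.  ]
```

Because the expanded features carry extra input information, a network trained on them can often fit nonlinear structure with a smaller hidden layer and fewer training epochs, which is consistent with the faster convergence reported in the abstract.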
