Abstract

The recursive least squares (RLS) learning algorithm for multilayer feedforward neural networks conventionally uses a sigmoid nonlinearity at the node outputs. It is shown that replacing the sigmoid with a piecewise linear function at the node outputs makes the algorithm faster. The modified algorithm improves computational efficiency, and by preserving matrix symmetry it avoids the explosive divergence normally seen in the conventional RLS algorithm due to finite-precision effects. The piecewise linear function also removes the approximation that is otherwise necessary in deriving the conventional algorithm with the sigmoid nonlinearity. Simulation results on the XOR problem, the 4–2–4 encoder and a function approximation problem indicate that the modified algorithm reduces the occurrence of local minima and improves the convergence speed compared with the conventional RLS algorithm. A nonlinear system identification and control problem is considered to demonstrate the application of the algorithm to complex problems.
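
As a rough illustration of the two ideas the abstract highlights, the sketch below shows a piecewise linear ("hard-limited") activation of the kind that can replace the sigmoid, and a single-node RLS step that re-symmetrizes the inverse correlation matrix after each update to curb finite-precision drift. This is a minimal sketch under assumed conventions: the breakpoints, slope, forgetting factor and variable names are illustrative and are not taken from the paper.

    import numpy as np

    # Assumed piecewise linear activation: linear through the origin region,
    # saturated to [0, 1] outside it (breakpoints/slope are illustrative).
    def piecewise_linear(v, slope=0.25):
        return np.clip(0.5 + slope * v, 0.0, 1.0)

    def piecewise_linear_deriv(v, slope=0.25):
        y = 0.5 + slope * v
        # Constant derivative on the linear segment, zero in saturation.
        return np.where((y > 0.0) & (y < 1.0), slope, 0.0)

    # Hedged sketch of one RLS step for a single node's weight vector w,
    # with explicit re-symmetrization of the inverse correlation matrix P.
    def rls_update(w, P, x, d, lam=0.99):
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        e = d - w @ x                    # a priori error at the node's linear output
        w = w + k * e                    # weight update
        P = (P - np.outer(k, Px)) / lam  # inverse correlation matrix update
        P = 0.5 * (P + P.T)              # enforce symmetry against round-off drift
        return w, P

Because the activation is exactly linear over its central segment, a desired node output on that segment can be mapped back to a desired linear (pre-activation) signal without the approximate inversion that the sigmoid requires, which is the point the abstract makes about avoiding the approximation in the derivation.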
