Abstract

This article carries out a comparative study of zero-finding neural networks for nonlinear functions. By taking the view that artificial recurrent neural nets are dynamical systems described by ordinary differential equations (ODEs), new neural nets were recently derived in the literature using a unified control Liapunov function (CLF) approach, after interpreting the zero-finding problem as a regulation problem for a closed-loop continuous-time dynamical system. The resulting neural net, or continuous-time ODE, is discretized by Euler's method, and the discretization step size is interpreted as a control chosen to optimize the decrement of the chosen CLF along system trajectories. Given the viewpoint adopted in this article, the words dynamical system, ODE and neural net are used interchangeably. For standard test functions of two variables, the basins of attraction are found by numerical simulation, starting from a uniformly distributed grid of initial points. For the chosen test functions, analysis of the basins shows a correlation between the regularity of the basin boundaries and the predictability of convergence to a zero. In addition, this analysis suggests how to construct a team algorithm with favorable convergence properties.
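The abstract's procedure can be sketched as follows, under assumed choices not specified here: the CLF is taken as V(x) = ½‖f(x)‖², the underlying ODE as the Newton flow ẋ = −J(x)⁻¹f(x), and the step-size control as a grid search that maximizes the decrement of V per Euler step. The test function, step grid, and tolerances below are illustrative only, not taken from the paper.

```python
import numpy as np

def clf(f, x):
    """Candidate control Liapunov function V(x) = 0.5 * ||f(x)||^2 (assumed choice)."""
    r = f(x)
    return 0.5 * r @ r

def euler_clf_step(f, jac, x, step_grid=np.linspace(0.05, 2.0, 40)):
    """One Euler step of the Newton flow dx/dt = -J(x)^{-1} f(x).

    The step size h is treated as a control and chosen from a grid so as to
    maximize the decrement of the CLF along the trajectory.
    """
    d = np.linalg.solve(jac(x), -f(x))          # Newton direction
    candidates = [x + h * d for h in step_grid]  # trial Euler steps
    values = [clf(f, c) for c in candidates]
    best = int(np.argmin(values))                # largest CLF decrease
    return candidates[best]

def find_zero(f, jac, x0, tol=1e-10, max_iter=200):
    """Iterate the CLF-controlled Euler discretization until ||f(x)|| is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(f(x)) < tol:
            return x, True
        try:
            x = euler_clf_step(f, jac, x)
        except np.linalg.LinAlgError:            # singular Jacobian: give up
            return x, False
    return x, False

def basin_labels(f, jac, zeros, lo=-2.0, hi=2.0, n=100):
    """Label each point of an n-by-n grid of initial conditions by the zero it
    converges to (-1 on failure), mimicking the basin-of-attraction study."""
    labels = np.full((n, n), -1, dtype=int)
    xs = np.linspace(lo, hi, n)
    for i, a in enumerate(xs):
        for j, b in enumerate(xs):
            x, ok = find_zero(f, jac, np.array([a, b]))
            if ok:
                labels[i, j] = int(np.argmin([np.linalg.norm(x - z) for z in zeros]))
    return labels

# Illustrative two-variable test function (not from the paper):
# f(x, y) = (x^2 - y - 1, x - y^2 + 1), which has a zero at (-1, 0).
f = lambda x: np.array([x[0]**2 - x[1] - 1.0, x[0] - x[1]**2 + 1.0])
jac = lambda x: np.array([[2.0 * x[0], -1.0], [1.0, -2.0 * x[1]]])
root, converged = find_zero(f, jac, np.array([1.5, 0.5]))
```

In this reading, the regularity of the basin boundaries produced by `basin_labels` is what the authors correlate with the predictability of convergence; running several such CLF-controlled iterations in parallel from the same initial point and keeping the first to converge is one way a team algorithm could be assembled.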
