Abstract

The authors study the stability and convergence properties of recurrent high-order neural networks (RHONNs) as models of nonlinear dynamical systems. The RHONN consists of dynamical neurons distributed throughout the network and interconnected by high-order connections. It is shown that if a sufficiently large number of high-order connections between neurons is allowed, the RHONN model can approximate the input-output behavior of general dynamical systems to any degree of accuracy. Exploiting the linear-in-the-weights property of the RHONN model, the authors develop identification schemes, derive adaptive laws for adjusting the weights, and analyze the convergence and stability of these laws. In the absence of modeling error, the state error between the system and the RHONN model converges to zero asymptotically. When modeling errors are present, the σ-modification is proposed as a method of guaranteeing the stability of the overall scheme. The feasibility of applying these techniques is demonstrated on the identification of a simple rigid robotic system.
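As a rough illustration of the scheme the abstract describes, the sketch below simulates a scalar RHONN identifier of the form x̂̇ = −a·x̂ + wᵀz(x, u), where z collects high-order sigmoidal terms, together with a gradient adaptive law augmented by σ-modification leakage. The plant, the choice of regressor terms, and all gains are hypothetical placeholders, not the paper's own example; this is a minimal sketch of the general technique, not a reproduction of the authors' robotic-system study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical scalar plant to be identified (not from the paper):
# x_dot = -x + sin(x) + u
def plant(x, u):
    return -x + np.sin(x) + u

# High-order regressor: products of sigmoids of state and input.
# The RHONN model is linear in the weights w for this fixed z(x, u).
def regressor(x, u):
    s, su = sigmoid(x), sigmoid(u)
    return np.array([s, su, s * su, s**2])  # first- and second-order terms

a = 1.0        # stable pole of the identifier dynamics (assumed)
gamma = 5.0    # adaptation gain (assumed)
sigma = 0.01   # sigma-modification leakage, for robustness to modeling error
dt = 1e-3
x, xhat = 0.5, 0.0
w = np.zeros(4)

for k in range(20000):
    u = np.sin(k * dt)                    # bounded excitation input
    z = regressor(x, u)
    e = xhat - x                          # state (identification) error
    # Plant and identifier dynamics, forward-Euler integration
    x += dt * plant(x, u)
    xhat += dt * (-a * xhat + w @ z)
    # Gradient adaptive law with sigma-modification leakage term
    w += dt * (-gamma * e * z - sigma * w)

print(abs(xhat - x), w)
```

With σ = 0 and no modeling error the weights would drive the error to zero asymptotically; here the plant lies outside the model class, so the leakage term −σw keeps the weights bounded at the cost of a small residual error, which is the trade-off the abstract alludes to.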
