Abstract

Zhang Neural Networks rely on convergent 1-step ahead finite difference formulas, of which very few are known. Those that are known were constructed in ad hoc ways and suffer from low truncation error orders. This paper develops a constructive method for finding convergent look-ahead finite difference schemes of higher truncation error order. The method seeds the free variables of a linear system built from Taylor expansion coefficients and then applies a minimization algorithm to the maximal magnitude root of the formula's characteristic polynomial. This lets us find new convergent 1-step ahead finite difference formulas of any truncation error order. Once a characteristic polynomial has been found whose roots lie inside the complex unit circle, with no repeated roots on it, the associated look-ahead ZNN discretization formula is convergent and can be used to solve any discretized ZNN based model. Our method recreates and validates the few known convergent formulas, all of which have truncation error order at most 4, and it creates new convergent 1-step ahead difference formulas with truncation error orders 5 through 8.
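The convergence criterion stated above (all characteristic-polynomial roots inside the complex unit circle, and any root on the circle simple) can be checked numerically. The sketch below is illustrative only: the function name `is_zero_stable`, the numpy-based root finding, and the tolerances are assumptions, not the paper's actual seeding-and-minimization algorithm.

```python
import numpy as np

def is_zero_stable(coeffs, tol=1e-6):
    """Root condition check (illustrative, not the paper's algorithm):
    every root of the characteristic polynomial must lie inside the
    unit circle, and any root on the circle must be simple."""
    roots = np.roots(coeffs)  # coefficients in descending degree order
    for r in roots:
        m = abs(r)
        if m > 1 + tol:
            return False  # a root strictly outside the unit circle
        if m > 1 - tol:
            # root (numerically) on the unit circle: it must be simple,
            # i.e. no other root may coincide with it within tolerance
            if np.sum(np.abs(roots - r) < tol) > 1:
                return False
    return True

# z^2 - 1 = (z - 1)(z + 1): simple roots on the circle -> convergent
print(is_zero_stable([1, 0, -1]))   # True
# z^2 - 2z + 1 = (z - 1)^2: repeated root on the circle -> not convergent
print(is_zero_stable([1, -2, 1]))   # False
```

A candidate formula produced by the seeding step would be accepted or rejected by a test of this kind before its truncation error order is examined.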
