Abstract

We present a theoretical analysis of a version of the LAPART adaptive inferencing neural network. Our main result is a proof that the new architecture, called LAPART 2, converges in two passes through a fixed training set of inputs. We also prove that it does not suffer from template proliferation. For comparison, Georgiopoulos et al. have proved the upper bound n-1 on the number of passes required for convergence for the ARTMAP architecture, where n is the size of the binary pattern input space. If the ARTMAP result is regarded as an n-pass, or finite-pass, convergence result, ours is then a two-pass, or fixed-pass, convergence result. Our results have added significance in that they apply to set-valued mappings, as opposed to the usual supervised learning model of affixing labels to classes.
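The convergence claims above concern stabilization over repeated passes through a fixed training set: training halts on the first full pass in which no template changes. The sketch below illustrates only this pass-counting notion of convergence; it is not the LAPART 2 algorithm. The `update_templates` rule here is a hypothetical ART-like stand-in (componentwise-min template shrinking with a vigilance threshold `rho`), chosen because it makes pass-to-pass stabilization easy to observe.

```python
# Illustrative sketch only: a generic "train until stable" loop that counts
# passes through a fixed training set. The learning rule is a hypothetical
# ART-like min-intersection rule, NOT the LAPART 2 update from the paper.
import numpy as np

def update_templates(templates, x, rho=0.7):
    """Assign x to the first template w with |min(x, w)| / |x| >= rho,
    shrinking that template by componentwise min; otherwise add x as a
    new template. Return True iff any template changed."""
    for i, w in enumerate(templates):
        m = np.minimum(x, w)
        if m.sum() / max(x.sum(), 1e-12) >= rho:
            if not np.array_equal(m, w):
                templates[i] = m  # template shrinks toward the match
                return True
            return False          # matched without changing anything
    templates.append(x.copy())    # no match: commit a new template
    return True

def train_to_convergence(X, rho=0.7, max_passes=100):
    """Repeat full passes over the fixed training set X until a pass
    makes no template change; return (templates, passes_used)."""
    templates = []
    for p in range(1, max_passes + 1):
        changed = False
        for x in X:
            changed |= update_templates(templates, x, rho)
        if not changed:
            return templates, p
    return templates, max_passes

X = np.array([[1, 0, 1, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
templates, passes = train_to_convergence(X)
print(passes)  # -> 2: pass 1 commits templates, pass 2 confirms stability
```

Note that under this counting convention the confirming pass itself is counted, so two passes is the best possible outcome for a nonempty training set; the paper's result is that LAPART 2 always attains this bound, versus the n-1 upper bound Georgiopoulos et al. proved for ARTMAP.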
