Abstract

The problem of parametrizing single hidden layer scalar neural networks with continuous activation functions is investigated. A connection, which appears to be new, is drawn between realization theory for linear dynamical systems, rational functions, and neural networks. This connection yields a general parametrization of such networks in terms of strictly proper rational functions. Some existence and uniqueness results are derived. Jordan decompositions are developed, showing how the general form can be expressed as a sum of canonical second-order sections. The parametrization may be useful for studying learning algorithms.
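The object of study is the standard single hidden layer scalar network, y = Σᵢ cᵢ σ(aᵢ x + bᵢ). The sketch below, in Python with NumPy, shows only this standard form for concreteness; the paper's rational-function parametrization and its second-order-section decomposition are not reproduced here, and the parameter names are illustrative.

```python
import numpy as np

def scalar_network(x, a, b, c, sigma=np.tanh):
    """Evaluate y = sum_i c_i * sigma(a_i * x + b_i) for a scalar input x.

    a, b, c are 1-D arrays of equal length (one entry per hidden unit);
    sigma is a continuous activation function applied elementwise.
    """
    return float(np.dot(c, sigma(a * x + b)))

# Illustrative 3-unit network (hypothetical parameter values).
a = np.array([1.0, -0.5, 2.0])
b = np.array([0.0, 1.0, -1.0])
c = np.array([0.5, 1.0, -0.25])
y = scalar_network(0.3, a, b, c)
```

The abstract's parametrization concerns how the tuples (aᵢ, bᵢ, cᵢ) can be encoded via strictly proper rational functions, by analogy with state-space realizations of transfer functions in linear systems theory.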
