Abstract

Recursive neural networks are a powerful tool for processing structured data, thus bridging the gap between connectionism, which is usually associated with poorly structured data, and a great variety of real-world problems in which the information is naturally encoded in the relationships among the basic entities. In this paper, some theoretical results about linear recursive neural networks are presented that establish conditions on their dynamical properties and on their capability to encode and classify structured information. Many of the limitations of the linear model, being intrinsically related to recursive processing, are inherited by the general model, which clarifies its computational capabilities and range of applicability. As a byproduct of our study, some connections with classical linear system theory are established, with the processing extended from sequences to graphs.
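To fix ideas, the following is a minimal sketch (not the authors' construction) of a linear recursive network over a labelled rooted tree, assuming the standard recursive formulation in which a node's state is a linear function of its label and of its children's states, and the structure-level output is read out at the root. The matrices A, B, C, the collapsing of all child positions into a single matrix A, and the toy tree are illustrative assumptions.

```python
# Sketch of a *linear* recursive neural network over a rooted tree:
#   x_v = A @ (sum of children's states) + B @ u_v,   y = C @ x_root
# All matrices and the example tree below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
state_dim, label_dim, out_dim = 3, 2, 1

A = rng.standard_normal((state_dim, state_dim))  # child-state aggregation matrix
B = rng.standard_normal((state_dim, label_dim))  # node-label input matrix
C = rng.standard_normal((out_dim, state_dim))    # readout matrix at the root


def encode(node):
    """Recursively compute the state of `node`, given as (label, children)."""
    label, children = node
    child_sum = sum((encode(c) for c in children), start=np.zeros(state_dim))
    return A @ child_sum + B @ np.asarray(label)


# Toy labelled tree: a root with two leaf children.
tree = ([1.0, 0.0], [([0.0, 1.0], []), ([1.0, 1.0], [])])
y = C @ encode(tree)  # structure-level output computed at the root
print(y)
```

Because every operation is linear, the state at the root is a linear combination of the node labels weighted by powers of A along the paths to the root, which is what allows the dynamical and encoding properties of the model to be studied with tools from linear system theory.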
