Abstract

We examine two connectionist networks, a fractal learning neural network (FLNN) and a Simple Recurrent Network (SRN), that are trained to process center-embedded symbol sequences. Previous work provides evidence that connectionist networks trained on infinite-state languages tend to form fractal encodings. Most of that work focuses on simple counting-recursion cases (e.g., aⁿbⁿ), which are not comparable to the complex recursive patterns found in natural language syntax. Here, we consider cases with exponential state growth (including mirror recursion), describe a new training scheme that appears to facilitate learning, and note that connectionist learning of these cases has a continuous-metamorphosis property very different from what is achievable with symbolic encodings. We identify a property, ragged progressive generalization, that helps make this difference clearer. We suggest two conclusions. First, the fractal analysis of these more complex learning cases makes it possible to compare connectionist networks and symbolic models of grammatical structure in a principled way; this helps remove the black-box character of connectionist networks and indicates how the theory they support differs from symbolic approaches. Second, the findings indicate the value of future, linked mathematical and empirical work on these models, something that is more feasible now than it was 10 years ago.
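
To make the contrast concrete, the following minimal Python sketch (illustrative only, not code from the paper; the symbol pairs and the helper names counting_recursion and mirror_recursion are assumptions for exposition) generates strings of the two language types the abstract distinguishes. A counting-recursion string needs only a counter to be checked, whereas a mirror-recursion (center-embedded) string requires recalling the entire prefix in reverse order, which is why the number of states to be distinguished grows exponentially with embedding depth.

    # Illustrative sketch: counting recursion vs. mirror recursion.
    import random

    def counting_recursion(n):
        """Counting recursion: n 'a's followed by n matching 'b's (a^n b^n)."""
        return ["a"] * n + ["b"] * n

    def mirror_recursion(depth, pairs=(("a", "A"), ("b", "B"))):
        """Mirror recursion (center embedding): a string of opening symbols
        followed by their matching closing symbols in reverse order."""
        opening = [random.choice(pairs) for _ in range(depth)]
        left = [o for o, _ in opening]
        right = [c for _, c in reversed(opening)]
        return left + right

    if __name__ == "__main__":
        print(counting_recursion(3))   # ['a', 'a', 'a', 'b', 'b', 'b']
        print(mirror_recursion(3))     # e.g. ['a', 'b', 'a', 'A', 'B', 'A']
        # With k symbol pairs, depth d requires distinguishing k**d prefixes,
        # versus only d + 1 counter states for counting recursion.
        print(2 ** 3, "vs", 3 + 1)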
