Abstract

A connectionist-inspired, parallel processing network is presented which learns, on the basis of (relevantly) sparse input, to assign meaning interpretations to novel test sentences in both active and passive voice. Training and test sentences are generated from a simple recursive grammar, but once trained, the network successfully processes thousands of sentences containing deeply embedded clauses. All training is unsupervised with regard to error feedback; only Hebbian and self-organizing forms of training are employed. In addition, the active–passive distinction is acquired without any supervised provision of cues or flags (in the output layer) that indicate whether the input sentence is in active or passive voice. In more detail: (1) The model learns on the basis of a corpus of about 1000 sentences, while the set of potential test sentences contains over 100 million sentences. (2) The model generalizes its capacity to interpret active and passive sentences to substantially deeper levels of clausal embedding. (3) After training, the model satisfies criteria for strong syntactic and strong semantic systematicity that humans also satisfy. (4) Symbolic message passing occurs within the model's output layer. This symbolic aspect reflects certain prior language acquisition assumptions.
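The abstract names Hebbian and self-organizing training as the model's only learning mechanisms but does not specify the architecture. As a generic, purely illustrative sketch (not the paper's actual network), the core Hebbian rule strengthens a connection in proportion to correlated pre- and postsynaptic activity, with no error signal:

```python
import numpy as np

# Minimal outer-product Hebbian update -- illustrative only; the paper's
# actual architecture and learning rules are not reproduced here.
# Each weight grows in proportion to the product of presynaptic (input)
# and postsynaptic (output) activity, so no error feedback is needed.
def hebbian_update(w, x, y, lr=0.1):
    """Return weights updated by dw[i, j] = lr * y[i] * x[j]."""
    return w + lr * np.outer(y, x)

w = np.zeros((2, 3))              # postsynaptic x presynaptic weights
x = np.array([1.0, 0.0, 1.0])     # presynaptic activations
y = np.array([0.5, 1.0])          # postsynaptic activations
w = hebbian_update(w, x, y)       # only co-active pairs are strengthened
```

Because the update depends only on locally available activity, learning of this kind is unsupervised, which is what allows the model described above to acquire the active–passive distinction without output-layer flags or error feedback.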