Abstract

In recent years, a theory of distributional learning of phrase structure grammars has been developed, starting with the simple algorithm presented by Clark and Eyraud (2007). These techniques build on classic ideas from American structuralist linguistics (Wells, 1947; Harris, 1954). Since that initial paper, the algorithms have been extended to larger classes of grammars, notably to the class of Multiple Context-Free grammars by Yoshinaka (2011). In this talk we will sketch a theory of language acquisition based on these techniques and contrast it with other proposals, such as the semantic bootstrapping and parameter setting models. The proposal rests on three recent results: first, a weak learning result for a class of languages that plausibly includes all natural languages (Clark and Yoshinaka, 2013); second, a strong learning result for some context-free grammars, which includes a general strategy for converting weak learners into strong learners (Clark, 2013a); and finally, a theoretical result that all minimal grammars for a language have distributionally definable syntactic categories (Clark, 2013b). We argue that we now have all the pieces for a complete and explanatory theory of language acquisition based on distributional learning, and we sketch some of the nontrivial predictions this theory makes about syntax and the syntax-semantics interface.
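To make the core technique concrete, here is a minimal Python sketch of the distributional idea behind this line of work: substrings of sentences in a sample are grouped by the contexts in which they occur, following Harris's notion of substitutability. This is an illustration only, not the algorithm of Clark and Eyraud (2007); in particular, it groups substrings whose context sets are identical, whereas the published algorithms use weaker substitutability criteria, and the toy corpus and helper names are our own assumptions.

```python
# Sketch of distributional congruence: substrings sharing exactly the
# same contexts in a sample are grouped together. This simplifies the
# actual learning algorithms (which use weaker substitutability tests).
from collections import defaultdict

def contexts(sample):
    """Map each substring (as a tuple of words) to the set of contexts
    (left, right) such that left + substring + right is in the sample."""
    ctx = defaultdict(set)
    for sentence in sample:
        words = sentence.split()
        n = len(words)
        for i in range(n):
            for j in range(i + 1, n + 1):
                sub = tuple(words[i:j])
                ctx[sub].add((tuple(words[:i]), tuple(words[j:])))
    return ctx

def congruence_classes(sample):
    """Group substrings with identical context sets; each nontrivial
    class is a candidate distributionally defined syntactic category."""
    classes = defaultdict(list)
    for sub, cs in contexts(sample).items():
        classes[frozenset(cs)].append(sub)
    return [subs for subs in classes.values() if len(subs) > 1]

# Toy corpus (illustrative only)
sample = ["the dog barks", "the cat barks", "the dog sleeps", "the cat sleeps"]
for cls in congruence_classes(sample):
    print([" ".join(s) for s in cls])
```

On this toy sample, "dog" and "cat" fall into one class and "barks" and "sleeps" into another, which illustrates the sense in which syntactic categories can be read off the distribution of strings in a language, as in the Clark (2013b) result mentioned above.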
