Abstract

What makes a language a natural language? A longstanding tradition in generative grammar holds that a language is natural just in case it is learnable under a constellation of auxiliary assumptions about the input evidence available to children. Yet another approach seeks some key mathematical property that distinguishes the natural languages from all possible symbol-systems. With some exceptions (for example, Chomsky's demonstration that a complete characterization of our grammatical knowledge lies beyond the power of finite state languages), the mathematical approach has not provided clear-cut results. For example, for a variety of reasons we cannot say that the predicate "is context-free" characterizes all and only the natural languages. Still another use of mathematical analysis in linguistics has been to diagnose a proposed grammatical formalism as too powerful (allowing too many grammars or languages) rather than as too weak. Such a diagnosis was supposed by some to follow from Peters and Ritchie's demonstration that the theory of transformational grammar as described in Chomsky's Aspects of the Theory of Syntax could specify grammars to generate any recursively enumerable set. For some, this demonstration marked a watershed in the formal analysis of transformational grammar. One general reaction (not prompted by the Peters and Ritchie result alone) was to turn to other theories of grammar designed explicitly to avoid the problems of a theory that could specify an arbitrary Turing machine computation. The proposals for generalized phrase structure grammar (GPSG) and lexical-functional grammar (LFG) have explicitly emphasized this point. GPSG aims for grammars that generate context-free languages (though there is some recent wavering on this point; see Pullum 1984); LFG, for languages that are at worst context-sensitive. Whatever the merits of the arguments for this restriction in terms of weak generative capacity (and they are far from obvious, as discussed at length in Berwick and Weinberg 1983), one point remains: the switch was prompted by criticism of the nearly two-decades-old Aspects theory. Much has changed in transformational grammar in twenty years. Modern transformational grammars no longer contain swarms of individual rules such as Passive, Raising, or Dative. The modern government-binding (GB) theory does not reconstruct a deep structure, does not contain powerful deletion rules, and has introduced a whole host of new constraints. Given these sweeping changes, it seems appropriate to re-examine the Peters and Ritchie result and to compare the power of the newer GB-style theories with that of other current linguistic theories. That is the aim of this paper. The basic points to be made are these:
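The abstract's reference to Chomsky's finite-state demonstration can be unpacked with the standard textbook construction: center-embedded dependencies ("the rat the cat chased slept") pattern like the formal language {a^n b^n}, which no finite automaton recognizes, since matching the two counts requires unboundedly many distinguishable states. The sketch below is our illustration of that classic argument, not anything from the paper itself; it recognizes {a^n b^n} with a single counter, i.e., with pushdown power, one level above finite-state on the Chomsky hierarchy.

# A minimal sketch of the classic argument behind Chomsky's finite-state
# demonstration (an illustration, not the paper's own construction):
# {a^n b^n : n >= 0} is recognizable with one unbounded counter, but not
# with any fixed finite set of states.

def accepts_anbn(s: str) -> bool:
    """Recognize {a^n b^n : n >= 0} using a single counter."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:           # an 'a' after any 'b' is out of order
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:        # more b's than a's so far
                return False
        else:
            return False         # alphabet is {a, b} only
    return count == 0            # counts must match exactly

if __name__ == "__main__":
    # "aabb" mirrors a doubly center-embedded clause: nested, matched
    # dependencies between the a's (subjects) and b's (verbs).
    for s in ["", "ab", "aabb", "aab", "ba", "abab"]:
        print(f"{s!r:8} -> {accepts_anbn(s)}")

The negative half of the argument is the usual pumping (or Myhill-Nerode) reasoning: any k-state automaton must reach the same state on a^i and a^j for some i < j <= k, and so cannot both accept a^i b^i and reject a^j b^i.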
