Abstract

One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to the source of the universal. In this paper, we explore the hypothesis that many semantic universals arise because expressions satisfying the universal are easier to learn than those that do not satisfy it. While the idea that learnability explains universals is not new, explicit accounts of learning that can make good on this hypothesis are few and far between. We propose a model of learning, back-propagation through a recurrent neural network, which can make good on this promise. In particular, we discuss the universals of monotonicity, quantity, and conservativity, and we perform computational experiments in which such a network is trained to verify quantifiers. Our results explain monotonicity and quantity quite well. We suggest that conservativity may have a different source than the other universals.
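To make the learning model concrete, the following is a minimal sketch of the kind of setup the abstract describes: a recurrent network trained by back-propagation to verify a quantified sentence against a scene. The encoding of objects by membership in the restrictor set A and scope set B, the class name QuantifierVerifier, and the example quantifier "at least three" are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Each object in a scene is coded by which of the two sets it belongs to:
# (A and B), (A only), (B only), (neither) -- a 4-dimensional one-hot vector.
ZONES = 4

class QuantifierVerifier(nn.Module):
    """Reads a sequence of objects and outputs P("Q As are B" is true)."""
    def __init__(self, hidden_size=12):
        super().__init__()
        self.rnn = nn.LSTM(input_size=ZONES, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, scenes):                 # scenes: (batch, seq_len, ZONES)
        _, (h, _) = self.rnn(scenes)           # final hidden state summarizes the scene
        return torch.sigmoid(self.out(h[-1]))  # probability that the sentence is true

def random_scene(seq_len=10):
    """A random sequence of objects, labeled by an illustrative quantifier."""
    zones = torch.randint(0, ZONES, (seq_len,))
    scene = torch.eye(ZONES)[zones]
    label = float((zones == 0).sum() >= 3)     # example quantifier: "at least three"
    return scene, torch.tensor([label])

model = QuantifierVerifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for step in range(2000):                       # back-propagation through the recurrent net
    scene, label = random_scene()
    pred = model(scene.unsqueeze(0)).squeeze(0)
    loss = loss_fn(pred, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Under the learnability hypothesis, one would compare how quickly such a network reaches a given accuracy for quantifiers that do and do not satisfy a proposed universal.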


Introduction

At first glance, the natural languages of the world exhibit tremendous differences amongst themselves. Early in one's linguistics education, however, one learns that languages do share a great deal of structure and that the differences between them can be described, circumscribed, and analyzed. This gives rise to one of the central questions in linguistic theory: What is the range of variation in human languages? A limitation on the range of possible variation will be a property that all (or at least almost all) languages share. Such a property will be a linguistic universal. It has been proposed, for example, that all languages which have shape adjectives also have color and size adjectives. Closer to the topic of the present paper is the claim that all languages have syntactic constituents (Noun Phrases) whose semantic function is to express generalized quantifiers. The present paper develops the hypothesis that semantic universals are to be explained in terms of learnability, at least in the domain of quantifiers.

