Abstract

The computation of scalar implicatures is sometimes costly relative to basic meanings. Among the costly computations are those that involve strengthening “some” to “not all” and strengthening inclusive disjunction to exclusive disjunction. The opposite is true for some other cases of strengthening, where the strengthened meaning is less costly than its corresponding basic meaning. These include conjunctive strengthenings of disjunctive sentences (e.g., free-choice inferences) and exactly-readings of numerals. Assuming that these are indeed all instances of strengthening via implicature/exhaustification, the puzzle is to explain why strengthening sometimes increases costs while at other times it decreases them. I develop a theory of processing costs that makes no reference to the strengthening mechanism or to other aspects of the derivation of the sentence's form/meaning. Instead, costs are determined by domain-general considerations of the grammar's output, and in particular by aspects of the meanings of ambiguous sentences and the particular ways in which they update the context. Specifically, I propose that when the hearer has to disambiguate between a sentence's basic and strengthened meaning, the processing cost of any particular choice is a function of (i) a measure of the semantic complexity of the chosen meaning and (ii) a measure of how much relevant uncertainty it leaves behind in the context. I measure semantic complexity with Boolean Complexity in the propositional case and with semantic automata in the quantificational case, both of which give a domain-general measure of the minimal representational complexity needed to express the given meaning. I measure relevant uncertainty with the information-theoretic notion of entropy; this domain-general measure formalizes how ‘far’ the meaning is from giving a complete answer to the question under discussion, and hence gives an indication of how much representational complexity is yet to come. Processing costs thus follow from domain-general considerations of current and anticipated representational complexity. The results might also speak to functional motivations for having strengthening mechanisms in the first place. Specifically, exhaustification allows language users to use simpler forms than would otherwise be available to both resolve relevant uncertainties and convey complex meanings.
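To make the propositional complexity measure concrete, the following minimal Python sketch brute-forces Boolean Complexity for meanings over two atoms: it searches for a shortest and/or/not formula (negation restricted to atoms) expressing a given truth table and counts its literal occurrences. The two-atom setting, the literal-counting metric, and the search procedure are simplifying assumptions of the sketch, not the paper's implementation; the quantificational case, which the paper handles with semantic automata, is not covered here.

    from itertools import product

    # Assignments to two atoms p, q; a meaning is identified with its truth table.
    ASSIGNMENTS = list(product([False, True], repeat=2))  # (p, q) pairs

    def table(fn):
        """Truth table of a Boolean function of (p, q), as a hashable tuple."""
        return tuple(fn(p, q) for p, q in ASSIGNMENTS)

    def boolean_complexity(target, max_size=8):
        """Number of literal occurrences in a shortest and/or/not formula over
        p, q expressing `target`, with negation restricted to atoms.
        Brute-force search by formula size; fine for toy two-atom cases."""
        literals = {
            table(lambda p, q: p), table(lambda p, q: not p),
            table(lambda p, q: q), table(lambda p, q: not q),
        }
        by_size = {1: literals}
        for n in range(1, max_size + 1):
            if target in by_size.get(n, set()):
                return n
            new = set()
            for i in range(1, n + 1):          # build all tables reachable with n + 1 literals
                j = n + 1 - i
                for a in by_size.get(i, set()):
                    for b in by_size.get(j, set()):
                        new.add(tuple(x and y for x, y in zip(a, b)))  # conjunction
                        new.add(tuple(x or y for x, y in zip(a, b)))   # disjunction
            by_size[n + 1] = new
        return None  # not expressible within max_size literals

    inclusive_or = table(lambda p, q: p or q)
    conjunction  = table(lambda p, q: p and q)
    exclusive_or = table(lambda p, q: p != q)

    print(boolean_complexity(inclusive_or))  # 2
    print(boolean_complexity(conjunction))   # 2
    print(boolean_complexity(exclusive_or))  # 4, e.g. (p and not q) or (not p and q)

On this measure, inclusive disjunction and conjunction each need only two literals, while the exclusive (strengthened) reading of disjunction needs four, mirroring the cost asymmetry described above.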

Highlights

  • Scalar implicatures are computed by a general mechanism that reasons about alternative propositions the speaker could have expressed but chose not to

  • There are two costs that I will consider: (i) the a priori complexity of mᵢ as a standalone object, here measured by semantic complexity, and (ii) how well mᵢ resolves relevant uncertainties in c, and how much relevant uncertainty it leaves in cᵢ, where I identify relevant uncertainty with a function of the number of cells mᵢ eliminates from the question-under-discussion in c (see the sketch after this list)

  • I will begin by pursuing an idea, to my knowledge first suggested in the context of implicature computation by Bott et al. (2012), that the semantic complexity of different pieces of information might be relevant to how hard they are to process
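The two costs in the second bullet can be given a toy operationalization, sketched below in Python: a candidate meaning is scored by a weighted sum of a supplied complexity value and the entropy, under a uniform prior, over the QUD cells it leaves open. The additive form, the weights, and the particular complexity scores are placeholders of my own, not definitions from the paper.

    import math

    def residual_entropy(qud_cells, meaning):
        """Relevant uncertainty left behind, in bits: entropy over the QUD cells
        still compatible with the chosen meaning, assuming a uniform prior."""
        remaining = [cell for cell in qud_cells if cell in meaning]
        return math.log2(len(remaining))

    def processing_cost(complexity, qud_cells, meaning,
                        w_complexity=1.0, w_entropy=1.0):
        """Toy cost for choosing a meaning: weighted sum of (i) its semantic
        complexity and (ii) the relevant uncertainty it leaves in the context.
        The additive form and the weights are placeholders."""
        return (w_complexity * complexity
                + w_entropy * residual_entropy(qud_cells, meaning))

    # QUD: "How many of the cookies did Jan eat?" -- three mutually exclusive cells.
    cells = ["none", "some-but-not-all", "all"]

    # Candidate disambiguations of 'Jan ate some of the cookies'.
    # The complexity scores are illustrative stand-ins (e.g., literal counts or
    # automaton sizes), not values computed in the paper.
    candidates = {
        "basic (some, existential)":       ({"some-but-not-all", "all"}, 1),
        "strengthened (some but not all)": ({"some-but-not-all"}, 2),
    }

    for label, (meaning, complexity) in candidates.items():
        print(f"{label}: complexity={complexity}, "
              f"residual bits={residual_entropy(cells, meaning):.1f}, "
              f"cost={processing_cost(complexity, cells, meaning):.1f}")

With these toy numbers, the strengthened reading pays more in complexity but less in residual uncertainty, which is the trade-off the proposal turns on.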

Summary

Basic and Strengthened Meanings

It is commonly assumed that the ‘basic meaning’ of the sentence in (1)—the meaning as compositionally derived using the lexical items overtly present in the sentence—is the existential meaning ∃ in (1-a) that we learn in introductory logic. Scalar implicatures are computed by a general mechanism that reasons about alternative propositions the speaker could have expressed but chose not to (in this case that Jan ate all of the cookies). It is commonly assumed that there is a function, STR, which computes strengthened meanings by conjoining the sentence S with the negation of some of the alternatives of S, ALT(S). The proposition ∃ ∧ ¬∀ is consistent, and so the strengthened meaning ∃ ∧ ¬∀ is derived. Innocent exclusion in this case was straightforward, but the mechanism is motivated by cases where non-trivial decisions need to be made about which alternatives to negate. I turn my attention to relating this set of competence-theoretic ideas to performance models.
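As an illustration of the strengthening step just described, the following Python sketch reconstructs STR over propositions modelled as sets of worlds, negating an alternative only when it is innocently excludable. The three-world model, the helper names, and the encoding are illustrative choices of mine rather than the paper's formalism.

    from itertools import combinations

    # Propositions modelled as frozensets of worlds; the three worlds encode
    # how many of the cookies Jan ate.
    WORLDS = frozenset({"none", "some-not-all", "all"})

    def neg(p):
        """Negation: complement relative to the world space."""
        return WORLDS - p

    def conj(*props):
        """Conjunction: intersection of propositions."""
        out = WORLDS
        for p in props:
            out = out & p
        return out

    def innocently_excludable(prejacent, alternatives):
        """An alternative is innocently excludable iff it belongs to every maximal
        set of alternatives whose joint negation is consistent with the prejacent."""
        names = list(alternatives)
        def consistent(subset):
            return bool(conj(prejacent, *(neg(alternatives[n]) for n in subset)))
        consistent_sets = [frozenset(s)
                           for k in range(len(names) + 1)
                           for s in combinations(names, k)
                           if consistent(s)]
        maximal = [s for s in consistent_sets
                   if not any(s < t for t in consistent_sets)]
        return frozenset.intersection(*maximal) if maximal else frozenset()

    def strengthen(prejacent, alternatives):
        """STR: conjoin the prejacent with the negations of the innocently
        excludable alternatives."""
        excluded = innocently_excludable(prejacent, alternatives)
        return conj(prejacent, *(neg(alternatives[n]) for n in excluded))

    some = frozenset({"some-not-all", "all"})   # basic existential meaning, ∃
    all_ = frozenset({"all"})                   # the stronger alternative, ∀

    print(strengthen(some, {"all": all_}))      # frozenset({'some-not-all'}), i.e. ∃ ∧ ¬∀

Running it on the existential meaning with the universal alternative returns the proposition true only in the ‘some but not all’ world, i.e. ∃ ∧ ¬∀, as in the derivation above.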

Processing Costs
A Puzzle
Accounting for CUPID
SEMANTIC COMPLEXITY
Boolean Complexity and Processing Costs
Semantic Automata
Norms of Good Conversational Behavior and Processing Costs
How Many Kinds of Cost?
CONCLUDING REMARKS