Abstract
Contemporary semantics has uncovered a sophisticated typology of linguistic inferences, characterized by their conversational status and their behavior in complex sentences. This typology is usually thought to be specific to language and in part lexically encoded in the meanings of words. We argue that it is neither. Using a method involving "composite" utterances that include normal words alongside novel nonlinguistic iconic representations (gestures and animations), we observe successful "one-shot learning" of linguistic meanings, with four of the main inference types (implicatures, presuppositions, supplements, homogeneity) replicated with gestures and animations. The results suggest a deeper cognitive source for the inferential typology than usually thought: Domain-general cognitive algorithms productively divide both linguistic and nonlinguistic information along familiar parts of the linguistic typology.
Published in: Proceedings of the National Academy of Sciences of the United States of America