Thinking with words: the role of externalization
Abstract: According to Chomsky and followers, natural language is a computational system that generates syntactic structures that are counterfunctional with respect to communication. Consequently, language is more appropriately considered as being “designed” for thought rather than communication. In this paper, we argue that, while natural language, understood as an internal computational system along standard generative lines, is recruited for distinctive human thinking, such recruitment also requires, and is strongly influenced by, a process we dub lexical externalisation. We first show that there is good reason to believe that the atomic items that go to form words essentially include phonological information about externalised words. After that, we explore what externalisation implies with respect to the relation between lexical items and concepts. We suggest that externalisation has profound effects on what concepts we think with, as well as on how concepts relate to lexical items (i.e., how thinking processes ultimately relate to linguistic computations).
- Research Article
- 10.6342/ntu.2008.00568
- Jan 1, 2008
The rise of Cognitive Semantics in the 1980s has changed the way many linguists view meaning. Meaning has been shown to be intimately tied to conceptual structures, influenced by cognitive operations that help shape the speaker’s perception and conception of the multiple dimensions of the situation at hand. Despite these advances, we are still in need of a comprehensive account that not only synthesizes core insights from previous research but also explicates how meaning comes about in natural language use. In light of this concern, we argue that the Theory of Lexical Concepts and Cognitive Models (henceforth LCCM) proposed by Evans (2006) is up to the challenge. Following the tenets of Cognitive Semantics, LCCM offers a lexical representation that includes two central constructs: the lexical concept and the cognitive model; the former refers to the conventionalized semantic values associated with a lexical item, whereas the latter subsumes encyclopedic knowledge structures. The two constructs reflect an underlying assumption of LCCM: meaning is largely a function of the utterance in which a word is embedded and of the complex processes of lexical concept integration. With the two constructs, LCCM clearly explains how lexical concepts afford access to cognitive models and are integrated to produce the intended interpretations in language use. Furthermore, an explicit set of criteria is presented for identifying lexical concepts, thereby preventing an unchecked proliferation of senses. Despite LCCM’s constructs and strengths, our bottom-up analysis of the Mandarin verb zou reveals that it is necessary to incorporate key findings from Croft’s (1993) study into LCCM. More specifically, we highlight the importance of the conceptual unity of domain and the autonomy-dependence principle in presenting a revised version of LCCM. Through an analysis of zou in non-compositional constructions, we then argue that a non-compositional construction, with its own meanings and fixed schematic form, needs to be treated as a single lexical item in meaning-construction. The lexical concepts of its inner components must first be internally integrated with the construction’s meaning to produce a lexical concept of the construction, which is then integrated and interpreted with the rest of the utterance. Afterwards, we review Evans and Zinken’s (to appear) study on how LCCM can handle figurative language such as metaphor and metonymy. While accepting their claims on the distinction between metaphor and metonymy, we again argue for the need to incorporate Croft’s (1993) insights into LCCM, in order to elaborate the details of the meaning-construction processes involved in non-literal language use and to illuminate its motivating principle. Metaphorical and metonymic usage examples of zou are then well explained by our revised version of LCCM, which accentuates the differences between literal and figurative language processing. All in all, this thesis argues that the LCCM framework proposed by Evans (2006), though boasting sound theoretical machinery, needs to incorporate key findings from Croft’s (1993) study to remedy the lingering problems that hinder its successful application to usage examples of Mandarin zou. Furthermore, it is demonstrated that the revised version of LCCM we present can adequately expound not only the literal use of zou but also its figurative use.
Therefore, we arrive at a clear, plausible theory of meaning that not only incorporates key insights from past cognitive-semantic research but also provides a unified account of literal and figurative language use in a single model. Most importantly, our revised version of LCCM proves to be a capable model, equipped with the necessary theoretical constructs and mechanisms to explicate how language users mean and understand each other.
- Research Article
- 10.1418/38784
- Jul 1, 2012
Asking what can be a substantive word in natural language is closely related to asking what can be a basic lexical concept. However, studies on lexical concepts in cognitive psychology and philosophy and studies on the constitution of lexical items in linguistics have little contact with each other. We argue that current linguistic approaches that decompose lexical items into grammatical structures do not map naturally to plausible models of the corresponding concepts. In particular, we claim that roots, as the purported carriers of lexeme-specific content, cannot encapsulate the conceptual content of a lexical item. Instead, we distinguish syntactic from morphological roots: the former act as differential indices, and the latter are forms which may or may not correlate with a stable meaning. What expresses a lexical concept is a structure which can be of variable size. We explore the view that basic lexical items are syntactically complex but conceptually simplex, and that the structural meaning defined by a grammatical construction constrains the concept associated with it. This can lead to predictive hypotheses about the possible content of lexical items.
- Research Article
- 10.1017/s0140525x02000122
- Dec 1, 2002
- Behavioral and Brain Sciences
This paper explores a variety of different versions of the thesis that natural language is involved in human thinking. It distinguishes amongst strong and weak forms of this thesis, dismissing some as implausibly strong and others as uninterestingly weak. Strong forms dismissed include the view that language is conceptually necessary for thought (endorsed by many philosophers) and the view that language is de facto the medium of all human conceptual thinking (endorsed by many philosophers and social scientists). Weak forms include the view that language is necessary for the acquisition of many human concepts and the view that language can serve to scaffold human thought processes. The paper also discusses the thesis that language may be the medium of conscious propositional thinking, but argues that this cannot be its most fundamental cognitive role. The idea is then proposed that natural language is the medium for nondomain-specific thinking, serving to integrate the outputs of a variety of domain-specific conceptual faculties (or central-cognitive "quasimodules"). Recent experimental evidence in support of this idea is reviewed and the implications of the idea are discussed, especially for our conception of the architecture of human cognition. Finally, some further kinds of evidence which might serve to corroborate or refute the hypothesis are mentioned. The overall goal of the paper is to review a wide variety of accounts of the cognitive function of natural language, integrating a number of different kinds of evidence and theoretical consideration in order to propose and elaborate the most plausible candidate.
- Research Article
- 10.1162/coli_r_00155
- Jun 1, 2013
- Computational Linguistics
Interpreting Motion: Grounded Representations for Spatial Language. Inderjeet Mani (Children's Organization of Southeast Asia) and James Pustejovsky (Brandeis University). Oxford University Press (Explorations in Language and Space series, edited by Emile Van Der Zee), 2012, xiii+166 pp; hardbound, ISBN 978-0-19-960124-0, £60.00
- Research Article
- 10.26483/ijarcs.v8i3.2944
- Apr 30, 2017
- International Journal of Advanced Research in Computer Science
Natural language processing brings together computer science, artificial intelligence, and computational linguistics, and is concerned with the interactions between computers and human (natural) languages. The paper critically analyses state-of-the-art algorithms in the fields of Information Extraction and Information Retrieval. Information Extraction is concerned, in general, with the extraction of semantic information from text; tools for retrieval, filtering, indexing, and the like have been built and used to accomplish tasks such as named entity recognition, co-reference resolution, and relationship extraction. By collating important work systematically, the paper also aims to simplify the process of referencing and literature review for future researchers and developers in the field of Natural Language Processing. Major challenges in NLP, including natural language understanding (enabling computers to derive meaning from human or natural language input) and natural language generation, among others, are also discussed. Keywords: Natural Language Processing, Information Extraction, Information Retrieval, Machine Translation, Natural Language Generation.
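As a concrete illustration of the extraction tasks surveyed above, here is a minimal sketch of named entity recognition plus a toy co-occurrence heuristic for relations. It assumes the open-source spaCy library and its small English model, which are illustrative choices of tooling, not tools discussed in the paper itself:

```python
# Minimal information-extraction sketch using spaCy (an assumed toolkit;
# the surveyed paper does not prescribe a specific library).
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Marie Curie won the Nobel Prize in 1903 and worked in Paris.")

# Named entity recognition: spans labeled PERSON, GPE, DATE, etc.
for ent in doc.ents:
    print(ent.text, ent.label_)

# A deliberately naive relation sketch: pair each PERSON with each GPE
# in the same sentence (a toy heuristic, not a real relation extractor).
for sent in doc.sents:
    people = [e for e in sent.ents if e.label_ == "PERSON"]
    places = [e for e in sent.ents if e.label_ == "GPE"]
    for p in people:
        for g in places:
            print(f"associated({p.text}, {g.text})")
```

A real relationship-extraction system would replace the co-occurrence heuristic with dependency patterns or a trained classifier, but the pipeline shape (tokenize, tag entities, then relate them) is the same.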
- Research Article
- 10.1504/ijmic.2013.055428
- Jan 1, 2013
- International Journal of Modelling, Identification and Control
Human-thinking simulated control (HTSC) depends on human cognition and the mechanism of human control thinking. Natural language is a tool of human thinking that does not depend on mathematical thinking; the concepts of natural language are the tools of human thinking. This paper discusses several important concepts drawn from the author’s own work on simulating human control thinking. These concepts provide an important basis for further research on human-thinking simulated control. Strategies for the artificial control process and for validating control effects are also introduced. A case study shows that the method is effective.
- Book Chapter
- 10.1007/978-3-319-41932-9_36
- Jan 1, 2016
Programming languages are indisputably different from natural languages. Natural languages are communicative in both oral and visual modalities and have thousands of unique lexical items, whereas programming languages may rely on only a few hundred lexical items and are solely practiced in the visual modality. Nonetheless, the two share similar properties like lexical items, syntactic structures, rules of discourse, productivity, and recursion. Previous research on the topic of second language acquisition (SLA) principles applied to programming language learning (PLL) is limited, but finds common ground. One promising crossover area is transfer, a strand of research in SLA on the influence of previously learned language(s) in the learning of an additional language. This review of the literature will focus on parallels between these research areas and discuss potential avenues for future research in PLL, including cross-training: leveraging the experience of learning one programming language for learning an additional programming language.
- Research Article
- 10.17697/ibmrd/2014/v3i2/51969
- Sep 1, 2014
- IBMRD's Journal of Management & Research
NLP brings people who speak different languages closer by providing interfaces between them. Consider India as an example: many peoples there speak many different languages, and a huge literature exists in local languages that is not understandable to speakers of other languages within India itself. Information technology can therefore be applied to natural language processing. Natural language processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages; it began as a branch of artificial intelligence. In theory, natural language processing is a very attractive method of human-computer interaction. Natural language understanding is sometimes referred to as an AI-complete problem because it seems to require extensive knowledge about the outside world and the ability to manipulate it. Modern NLP algorithms are grounded in machine learning, especially statistical machine learning. Research into modern statistical NLP algorithms requires an understanding of a number of disparate fields, including linguistics, computer science, and statistics. In this paper we study the role of NLP in conversion between Indian languages, such as Marathi to Hindi or Hindi to Gujarati. The languages of India are similar in many respects, including grammar, words, and alphabets. This paper discusses the available solutions, problems, and challenges in Indian language conversion.
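To make the conversion idea concrete, here is a deliberately toy sketch of dictionary-based word substitution between two Devanagari-script languages. The word pairs and the `convert` helper are hypothetical illustrations, not resources or methods from the paper; real conversion would also need to handle morphology, grammar, and context:

```python
# Toy Marathi-to-Hindi word-substitution converter (hypothetical example;
# a real system needs morphological analysis and contextual disambiguation).
MARATHI_TO_HINDI = {
    "पाणी": "पानी",  # water
    "आणि": "और",    # and
}

def convert(sentence: str) -> str:
    # Substitute known words; pass unknown words through unchanged,
    # which works only because both languages share the Devanagari script.
    return " ".join(MARATHI_TO_HINDI.get(w, w) for w in sentence.split())

print(convert("चहा आणि पाणी"))  # unknown word "चहा" passes through as-is
```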
- Book Chapter
- 10.1093/acprof:oso/9780198270126.003.0006
- Jan 24, 2002
Traditional generative grammar makes two related assumptions: first, that lexical items — the stored elements that are combined into larger expressions — enter the combinatorial system by virtue of being inserted into syntactic structures; and second, that lexical items are always words. In the parallel model of Chapter 5, lexical items emerge instead as parts of the interfaces among generative components. Moreover, by taking seriously the question of what is stored in memory, we will arrive at the view that lexical (i.e., stored) items are of heterogeneous sizes, from affixes to idioms and more abstract structures. This reconceptualization of the lexicon leads to striking consequences for linguistic theory; in particular, it breaks down some of the traditional distinctions between lexical items and rules of grammar. It also leads to a reconsideration of the formal character of language learning.
- Book Chapter
- 10.1007/978-94-010-0011-6_6
- Jan 1, 2003
A natural language is a language spoken and understood by all normal adult members of a speech community. It stands in contrast to specialized languages, both spoken and written, used within sub-groups of this community to perform specialized tasks, though shared in varying degrees with members outside these sub-groups. Both natural and specialized languages have lexical items whose meaning is specified within the semantic fields just discussed. In addition, they have the resources for combining sentences into blocks of discourse, as sequences of sentences about some common topic or related topics. A discourse block can vary in length from a short conversational exchange or haiku poem to a novel or many-volume history of a nation. For discourse formation, we use linking words to form a variety of types of combinations from sentences.
- Research Article
- 10.15837/ijccc.2011.3.2123
- Sep 1, 2011
- International Journal of Computers Communications & Control
I feel honored by the dedication of the Special Issue of IJCCC to me. I should like to express my deep appreciation to the distinguished Co-Editors and my good friends, Professors Balas, Dzitac and Teodorescu, and to the distinguished contributors, for honoring me. The subjects which are addressed in the Special Issue are on the frontiers of fuzzy logic.

The Foreword gives me an opportunity to share with the readers of the Journal my recent thoughts regarding a subject which I have been pondering about for many years - fuzzy logic and natural languages. The first step toward linking fuzzy logic and natural languages was my 1973 paper, "Outline of a New Approach to the Analysis of Complex Systems and Decision Processes." Two key concepts were introduced in that paper. First, the concept of a linguistic variable - a variable which takes words as values; and second, the concept of a fuzzy if-then rule - a rule in which the antecedent and consequent involve linguistic variables. Today, close to forty years later, these concepts are widely used in most applications of fuzzy logic.

The second step was my 1978 paper, "PRUF - a Meaning Representation Language for Natural Languages." This paper laid the foundation for a series of papers in the eighties in which a fairly complete theory of fuzzy-logic-based semantics of natural languages was developed. My theory did not attract many followers either within the fuzzy logic community or within the linguistics and philosophy of languages communities. There is a reason. The fuzzy logic community is largely a community of engineers, computer scientists and mathematicians - a community which has always shied away from semantics of natural languages. Symmetrically, the linguistics and philosophy of languages communities have shied away from fuzzy logic.

In the early nineties, a thought that began to crystallize in my mind was that in most of the applications of fuzzy logic, linguistic concepts play an important, if not very visible, role. It is this thought that motivated the concept of Computing with Words (CW or CWW), introduced in my 1996 paper "Fuzzy Logic = Computing with Words." In essence, Computing with Words is a system of computation in which the objects of computation are words, phrases and propositions drawn from a natural language. The same can be said about Natural Language Processing (NLP). In fact, CW and NLP have little in common and have altogether different agendas.

In large measure, CW is concerned with the solution of computational problems which are stated in a natural language. Simple example. Given: Probably John is tall. What is the probability that John is short? What is the probability that John is very short? What is the probability that John is not very tall? A less simple example. Given: Usually Robert leaves the office at about 5 pm. Typically it takes Robert about an hour to get home from work. What is the probability that Robert is home at 6:15 pm? What should be noted is that CW is the only system of computation which has the capability to deal with problems of this kind. The problem-solving capability of CW rests on two key ideas. First, employment of so-called restriction-based semantics (RS) for translation of a natural language into a mathematical language in which the concept of a restriction plays a pivotal role; and second, employment of a calculus of restrictions - a calculus which is centered on the Extension Principle of fuzzy logic.

What is thought-provoking is that neither traditional mathematics nor standard probability theory has the capability to deal with computational problems which are stated in a natural language. Not having this capability, it is traditional to dismiss such problems as ill-posed. In this perspective, perhaps the most remarkable contribution of CW is that it opens the door to empowering mathematics with a fascinating capability - the capability to construct mathematical solutions of computational problems which are stated in a natural language. The basic importance of this capability derives from the fact that much of human knowledge, and especially world knowledge, is described in natural language.

In conclusion, only recently did I begin to realize that the formalism of CW suggests a new and challenging direction in mathematics - the mathematical solution of computational problems which are stated in a natural language. For mathematics, this is an unexplored territory.
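As a minimal sketch of the two key concepts introduced in the 1973 paper, the following code models a linguistic variable whose values "tall" and "short" are fuzzy sets, applies Zadeh's classic "very" hedge, and fires a toy fuzzy if-then rule. The membership functions and the 160-190 cm breakpoints are illustrative assumptions, not values from the text:

```python
# Sketch of a linguistic variable and a fuzzy if-then rule
# (membership functions are illustrative assumptions, not from the text).

def tall(height_cm: float) -> float:
    # Membership in "tall": 0 below 160 cm, 1 above 190 cm, linear between.
    return min(1.0, max(0.0, (height_cm - 160.0) / 30.0))

def short(height_cm: float) -> float:
    # Here "short" is modeled as the complement of "tall".
    return 1.0 - tall(height_cm)

def very(mu: float) -> float:
    # Zadeh's concentration hedge: "very X" squares the membership in X.
    return mu ** 2

h = 178.0
print(f"tall({h}) = {tall(h):.2f}")            # degree of "John is tall"
print(f"very tall({h}) = {very(tall(h)):.2f}")  # degree of "very tall"
print(f"short({h}) = {short(h):.2f}")

# Fuzzy if-then rule: IF height is tall THEN suitability is high.
# Under min-implication, the consequent is clipped to the antecedent degree.
rule_activation = min(1.0, tall(h))
print(f"rule activation = {rule_activation:.2f}")
```

The "probably John is tall" examples in the Foreword go a step further: they place fuzzy restrictions on probabilities themselves, which is what the calculus of restrictions and the Extension Principle are for.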
- Book Chapter
- 10.1007/978-981-33-6518-6_5
- Jan 1, 2021
In this modern era of big data, researchers estimate that almost 80% of enterprise data is unstructured, meaning it resides in documents, research reports, surveys, articles, slides, and even emails. In a world where technology reigns supreme, data is considered the most important resource, and yet, although we are surrounded by data, we often cannot analyze it with traditional data technologies. New technologies are on the way, but we are still far from unleashing the full potential of data. Most unstructured data from documents, reports, and the like is in the form of language, often entire natural language sentences; to analyze it, we need to transform, extract, and process the relevant information it contains. Hence, for searching or merging data from different data resources or platforms, semantic technology comes into the picture. Semantic applications are steered by knowledge graphs, and there is no need to go through the hassle of data migration, since semantics connects the data. Semantics can be used across standards-based technologies, removing the need for different technologies for different data forms. While there are various enterprise catalogue products on the market, there is no solution that fully understands the context of a user's input, whether from a domain perspective, from the depth of knowledge the user is asking for, or by comprehensively addressing the questions coming from the user's perspective. For illustration, one can compare semantic technology to a Google search: it is comparable in terms of results, though Google uses natural language understanding (NLU), natural language processing (NLP), and other rich metadata, while semantic technologies rely on metadata enrichment, ontologies, and search and graph technologies to achieve the same feat of understanding a question from the user's perspective. For specific domains, an enterprise knowledge graph (EKG) can hold data that is structured or unstructured, and can use information repositories and ontologies to build better domain-specific graphs. The end goal of semantic technologies is to make machines understand data, and to this end they use the widely accepted RDF. Knowledge graphs are graphs whose nodes and edges carry the useful information for a particular domain; subject matter experts can be consulted while building a knowledge graph so that it contains all the domain information needed to answer queries. Modeling knowledge domains can be considered the core of semantic technologies: knowledge models define entities, relations, attributes, and values. Semantic technologies can also be integrated with machine learning models to increase the effectiveness of a search engine. RDF and similar data models can be processed in a way that mirrors how human thinking works, since they consist of triples, i.e., subject, predicate, object, which can be natural language statements or very close to natural language phrases.
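Since the chapter describes RDF as subject-predicate-object triples that read almost like natural language statements, here is a minimal sketch of building and querying such triples. It assumes the open-source rdflib package (the chapter does not name a library), and the names in the `EX` namespace are hypothetical illustrations:

```python
# Minimal RDF knowledge-graph sketch with rdflib (an assumed library;
# the example.org namespace and facts are hypothetical illustrations).
# Setup: pip install rdflib
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# A triple reads almost like a natural language sentence:
# "Marie Curie discovered polonium."
g.add((EX.MarieCurie, EX.discovered, EX.Polonium))
g.add((EX.MarieCurie, EX.name, Literal("Marie Curie")))

# SPARQL query against the graph: what did Marie Curie discover?
results = g.query("""
    SELECT ?thing
    WHERE { <http://example.org/MarieCurie>
            <http://example.org/discovered> ?thing . }
""")
for row in results:
    print(row.thing)  # -> http://example.org/Polonium
```

An enterprise knowledge graph works the same way at scale: domain ontologies constrain which predicates are meaningful, and queries like the one above answer user questions over the connected data.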
- Research Article
- 10.1515/jls-2012-0003
- Mar 28, 2012
- Journal of Literary Semantics
This paper engages with intertextuality and proposes a cognitively-informed approach to it based on the notion of semantic intertextual frames, an online processing domain which regulates the construction of word-level intertextual links. The construction of these frames is discussed in terms of Evans' LCCM Theory (Theory of Lexical Concepts and Cognitive Models), and the distinction he draws between lexical concepts and cognitive models. Lexical concepts afford access to cognitive models via direct or indirect access routes. Word-level intertextual connections are explained based on the identification of the same lexical item, a cognitive synonym or a hyponym. In the first two cases a direct access route to the cognitive model is afforded by the lexical concepts, while in the latter this route is indirect. A number of examples drawn from literary texts are used to illustrate the model.
- Research Article
- 10.1215/00318108-2895439
- Jul 1, 2015
- Philosophical Review
Pursuing Meaning
- Conference Article
- 10.1145/320599.320672
- Jan 1, 1985
Controlled natural language interfaces (extended abstract)