Abstract

The cognitive-linguistic theory of conceptual blending was introduced by Fauconnier and Turner in the late 1990s to provide a descriptive model and foundational approach for the (almost uniquely) human ability to invent new concepts. Whilst blending is often described as ‘fluid’ and ‘effortless’ when ascribed to humans, it becomes a highly complex, multi-paradigm problem in Artificial Intelligence. This paper aims to present a coherent computational narrative, focusing on how one may derive a formal reconstruction of conceptual blending from a deconstruction of the human ability of concept invention into some of its core components. It thus focuses on presenting the key facets that a computational framework for concept invention should possess. A central theme in our narrative is the notion of refinement, understood as ways of specialising or generalising concepts, an idea that can be seen as providing conceptual uniformity to a number of theoretical constructs as well as implementation efforts underlying computational versions of conceptual blending. Particular elements underlying our reconstruction effort include ontologies and ontology-based reasoning, image schema theory, spatio-temporal reasoning, abstract specification, social choice theory, and axiom pinpointing. We overview and analyse adopted solutions and then focus on open perspectives that address two core problems in computational approaches to conceptual blending: searching for the shared semantic structure between concepts—the so-called generic space in conceptual blending—and concept evaluation, i.e., determining the value of newly found blends.
