Abstract

Human language can communicate mental models between speakers. The question of why and how this works is closely tied to a specific variant of the symbol grounding problem, which still leaves many open questions. This paper presents the core mechanism of the TextMap system, a logic-based system for generating visuospatial representations from textual input. The system leverages a recent discovery linking the logical truth tables of formulae to images: a simple model counting mechanism that automatically extracts coordinate information from propositional Horn-logic knowledge bases encoding spatial predications. The system is built on a biologically inspired low-level bit-vector mechanism, the activation bit vector machine (ABVM), and requires no ontology beyond a list of which tokens indicate relations. This minimalism and simplicity make TextMap a general-purpose visualization and imagery tool. The paper demonstrates the core model counting mechanism and reports the results of a larger case study on the geographic layout of 13 cities.
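The idea of extracting coordinates by model counting can be illustrated with a toy sketch. This is not the ABVM or the paper's actual mechanism; the function name, the one-dimensional grid, and the `left_of` relation are invented purely for illustration. The sketch enumerates the models of a small spatial knowledge base by brute force and averages each object's admissible positions over all models to obtain a coordinate:

```python
from itertools import product

def coords_from_models(objects, left_of, width=3):
    """Toy illustration: derive 1-D coordinates from spatial predications
    by enumerating all models of the constraints and averaging positions.
    (Hypothetical sketch, not the TextMap/ABVM implementation.)"""
    models = []
    # Each candidate assignment places every object in one grid cell.
    for assignment in product(range(width), repeat=len(objects)):
        pos = dict(zip(objects, assignment))
        # A model must satisfy every left_of(a, b) predication.
        if all(pos[a] < pos[b] for a, b in left_of):
            models.append(pos)
    # Model counting step: each coordinate is the mean position
    # of the object across all satisfying models.
    return {o: sum(m[o] for m in models) / len(models) for o in objects}

layout = coords_from_models(["Paris", "Berlin"], [("Paris", "Berlin")])
# Paris is placed strictly left of Berlin in every model,
# so its averaged coordinate comes out smaller.
```

On a width-3 grid with the single predication `left_of(Paris, Berlin)`, three models survive, and the averaged positions already reflect the relative layout; the real system replaces this exponential enumeration with a bit-vector mechanism.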
