Abstract

Human language is a versatile tool for communicating mental models between speakers. This paper presents TextMap, a logic-based system for generating visuospatial representations from textual input. TextMap combines a minimalistic parser with a simple model-counting mechanism to automatically extract coordinate information from propositional Horn-logic knowledge bases encoding spatial predications. The system is based on a biologically inspired low-level bit-vector mechanism, the activation bit vector machine (ABVM). It requires no ontology apart from a list of which tokens indicate relations; this minimalism and simplicity make it a general-purpose visualization or imagery tool. The paper describes the TextMap application architecture as well as the key algorithms forming the ABVM core. The system is evaluated in a larger case study of a geographic layout of 13 cities, demonstrating its capabilities as well as its current shortcomings on a complex scenario: a human mental-map description.
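To make the pipeline concrete, the following is a hypothetical, heavily simplified sketch of the kind of task the abstract describes: turning spatial predications (here hard-coded as facts rather than parsed text) into 2-D coordinates by fixed-point propagation. The relation tokens, unit offsets, and propagation scheme are illustrative assumptions, not the actual ABVM or model-counting algorithm.

```python
# Illustrative sketch only (NOT the ABVM): assign grid coordinates so that
# simple spatial facts of the form (a, relation, b) are satisfied.

# Assumed unit offsets for a handful of relation tokens.
OFFSETS = {
    "north-of": (0, 1),
    "south-of": (0, -1),
    "east-of": (1, 0),
    "west-of": (-1, 0),
}

def layout(facts):
    """Place each entity at its reference entity's position plus the
    relation's unit offset, propagating until all facts are consumed."""
    coords = {}
    pending = list(facts)
    if pending:
        # Seed: anchor the first reference entity at the origin.
        coords[pending[0][2]] = (0, 0)
    changed = True
    while pending and changed:
        changed = False
        for fact in pending[:]:
            a, rel, b = fact
            dx, dy = OFFSETS[rel]
            if b in coords and a not in coords:
                bx, by = coords[b]
                coords[a] = (bx + dx, by + dy)
            elif a in coords and b not in coords:
                ax, ay = coords[a]
                coords[b] = (ax - dx, ay - dy)
            if a in coords and b in coords:
                pending.remove(fact)  # fact satisfied (or already fixed)
                changed = True
    return coords

facts = [
    ("Hamburg", "north-of", "Munich"),
    ("Berlin", "east-of", "Hamburg"),
]
print(layout(facts))
# → {'Munich': (0, 0), 'Hamburg': (0, 1), 'Berlin': (1, 1)}
```

A real system would of course also handle contradictory or underconstrained fact sets; this sketch silently accepts whatever placement the first applicable fact produces.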
