Deriving representations of meaning, whether at the word, the sentence, or the discourse level, is a problem with a long history in cognitive psychology and psycholinguistics. In this article, we describe a computational model of high‐dimensional context space, the Hyperspace Analog to Language (HAL), and present simulation evidence that HAL's vector representations can provide sufficient information to make semantic, grammatical, and abstract distinctions. Human participants were able to use the context neighborhoods that HAL generates to match words with similar items and to derive the word (or a similar word) from the neighborhood, demonstrating the cognitive compatibility of the representations with human processing. An experiment exploring the meaning of agent‐ and patient‐oriented verbs provided the context for discussing how the connotative aspects of word neighborhoods could supply cues for establishing a discourse model. An attempt to build a sentence‐level representation from the vector representations did not prove promising; this limitation of the representations is discussed in the context of similar models that have been more effective. A final set of experiments explored the nature of verbs and verb instruments, and we introduce a new methodology that extracts conceptual intersections from the high‐dimensional context space.
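To make the model concrete, the following is a minimal sketch of a HAL-style construction: a ramped sliding window accumulates weighted co-occurrence counts (closer words weigh more), each word's vector is its matrix row concatenated with its column, and "context neighborhoods" are nearest neighbors in that space. The function names, the window width, and the toy corpus are illustrative assumptions, not the published implementation or its parameters.

```python
import math

def hal_vectors(tokens, window=5):
    """Build HAL-style word vectors from a token stream.

    For each word, weighted co-occurrence counts with the preceding
    `window` words are accumulated; a word at distance d contributes
    weight (window - d + 1), so nearer words count more (an assumed
    ramped-window scheme in the spirit of HAL).
    """
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    # rows[i][j]: weighted count of word j occurring before word i
    rows = [[0.0] * n for _ in range(n)]
    for pos, word in enumerate(tokens):
        for d in range(1, window + 1):
            if pos - d < 0:
                break
            prev = tokens[pos - d]
            rows[index[word]][index[prev]] += window - d + 1
    # A word's vector is its row (preceding context) concatenated
    # with its column (following context).
    vectors = {}
    for w, i in index.items():
        col = [rows[j][i] for j in range(n)]
        vectors[w] = rows[i] + col
    return vectors

def neighbors(vectors, word, k=3):
    """Return the k nearest other words by Euclidean distance,
    i.e. a crude context neighborhood for `word`."""
    v = vectors[word]
    dist = lambda u: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    others = [w for w in vectors if w != word]
    return sorted(others, key=lambda w: dist(vectors[w]))[:k]
```

On a realistic corpus the vocabulary runs to tens of thousands of words, so the full vectors are very high-dimensional; the published work reduced them by retaining the highest-variance components, a step omitted here for brevity.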