Abstract

Lexical co-occurrence models of semantic memory represent word meaning by vectors in a high-dimensional space. These vectors are derived from word usage, as found in a large corpus of written text. Typically, these models are fully automated, an advantage over models of semantic representation that are based on human judgments (e.g., feature-based models). A common criticism of co-occurrence models is that their representations are not grounded: Concepts exist only relative to each other in the space produced by the model. It has been claimed that feature-based models offer an advantage in this regard. In this article, we take a step toward grounding a co-occurrence model. A feed-forward neural network is trained using backpropagation to provide a mapping from co-occurrence vectors to feature norms collected from subjects. We show that this network is able to retrieve the features of a concept from its co-occurrence vector with high accuracy and is able to generalize this ability to produce an appropriate list of features from the co-occurrence vector of a novel concept.
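
The following is a minimal sketch of the kind of mapping the abstract describes: a single-hidden-layer feed-forward network trained with backpropagation to map co-occurrence vectors onto binary feature-norm vectors. The dimensions, hidden-layer size, learning rate, loss, and toy data below are all assumptions for illustration; the paper's actual architecture and training details are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 200-d co-occurrence vectors, 500 possible features.
n_input, n_hidden, n_features = 200, 100, 500
lr = 0.1

# Weight matrices for a single-hidden-layer feed-forward network.
W1 = rng.normal(0, 0.1, (n_input, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_features))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(x @ W1)   # hidden activations
    y = sigmoid(h @ W2)   # per-feature "present" activations
    return h, y

def train_step(x, target):
    """One backpropagation update for a (co-occurrence vector, feature vector) pair."""
    global W1, W2
    h, y = forward(x)
    # Output and hidden deltas for sigmoid units with squared-error loss.
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

# Toy training data: random co-occurrence vectors paired with sparse
# binary feature norms (stand-ins for subject-generated feature lists).
X = rng.normal(size=(50, n_input))
T = (rng.random((50, n_features)) < 0.02).astype(float)

for epoch in range(100):
    for x, t in zip(X, T):
        train_step(x, t)

# Retrieving features for a "novel" concept: run its co-occurrence vector
# through the network and threshold the output units.
_, activations = forward(rng.normal(size=n_input))
predicted_features = np.where(activations > 0.5)[0]
```

Generalization to novel concepts, as claimed in the abstract, corresponds to the last step: a vector the network never saw during training still yields a thresholded feature list.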
