Abstract

Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations ultimately achieved superior performance owing to their greater scalability to large corpora. Finally, “noisy” permutations, in which units are mapped to other units arbitrarily (with no one-to-one mapping), performed nearly as well as true permutations. These findings increase the neural plausibility of random permutations and highlight their utility in vector space models of semantics.
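To make the two binding operators concrete, the sketch below encodes a short item sequence into a single memory trace in two ways: (a) HRR-style circular convolution of each item with a random position vector, and (b) a fixed random permutation applied once per position. Each trace is then probed for the item stored at a given position. This is a minimal illustration of the two mechanisms, not the authors' simulation code; the dimensionality, lexicon size, and position vectors are arbitrary choices made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024                      # dimensionality (arbitrary choice for the demo)
V = 50                        # size of a hypothetical lexicon
lexicon = rng.normal(0, 1 / np.sqrt(D), size=(V, D))   # random word vectors

def cconv(a, b):
    """Circular convolution via FFT (HRR-style binding)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def ccorr(a, b):
    """Circular correlation (approximate inverse of cconv)."""
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(b), n=len(a))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sequence = [3, 17, 42, 8]     # indices of the words to store, in order

# --- Convolution encoding: bind each word to a random position vector ---
positions = rng.normal(0, 1 / np.sqrt(D), size=(len(sequence), D))
trace_conv = sum(cconv(positions[k], lexicon[w]) for k, w in enumerate(sequence))

# --- Permutation encoding: shift each word vector by its position ---
perm = rng.permutation(D)     # one fixed random permutation
inv = np.argsort(perm)        # its inverse

def permute(v, k):            # apply the permutation k times
    for _ in range(k):
        v = v[perm]
    return v

def unpermute(v, k):          # apply the inverse permutation k times
    for _ in range(k):
        v = v[inv]
    return v

trace_perm = sum(permute(lexicon[w], k + 1) for k, w in enumerate(sequence))

# --- Probe both traces for the word stored in the third position ---
probe_conv = ccorr(positions[2], trace_conv)   # unbind with the position vector
probe_perm = unpermute(trace_perm, 3)          # undo the position-3 permutation

best_conv = max(range(V), key=lambda i: cosine(probe_conv, lexicon[i]))
best_perm = max(range(V), key=lambda i: cosine(probe_perm, lexicon[i]))
print(best_conv, best_perm)   # both should recover word index 42
```

In both cases the retrieved vector is noisy, so it is compared against the whole lexicon and the nearest neighbor by cosine is taken as the recalled item; the controlled comparisons in the paper concern how many such associations a single trace can hold before this clean-up step starts to fail.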

Highlights

  • Semantic space models (SSMs) have seen considerable recent attention in cognitive science both as automated tools to estimate semantic similarity between words and as psychological models of how humans learn and represent lexical semantics from contextual cooccurrences

  • It is perhaps not surprising that a TASA-trained random permutation model (RPM) would result in superior performance on the Test of English as a Foreign Language (TOEFL), as RPM was designed with random permutation (RP) in mind from the start and optimized with respect to its performance on TOEFL [17]

  • While these results do not approach the high correlations on these tasks achieved by state-of-the-art machine learning methods in computational linguistics, our results provide a rough approximation of the degree to which vector space architectures based on neurally plausible sequential encoding mechanisms can approximate the high-dimensional similarity space encoded in human semantic memory


Summary

Introduction

Semantic space models (SSMs) have seen considerable recent attention in cognitive science both as automated tools to estimate semantic similarity between words and as psychological models of how humans learn and represent lexical semantics from contextual cooccurrences (for a review, see [1]). These models build abstract semantic representations for words from statistical redundancies observed in a large corpus of text (e.g., [2, 3]). The resulting space positions words proximally if they cooccur more frequently than would be expected by chance and if they tend to occur in similar semantic contexts (even if they never directly cooccur).
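As a toy illustration of this idea, the sketch below builds a small word-by-word cooccurrence matrix from a three-sentence corpus and compares words by the cosine of their cooccurrence profiles; words used in similar contexts (here the hypothetical items "doctor" and "nurse") end up more similar even though they never directly cooccur. Real SSMs are trained on corpora many orders of magnitude larger and typically apply dimensionality reduction or random indexing, which this sketch omits.

```python
import numpy as np

# Toy corpus; a real SSM would be trained on millions of words of text.
corpus = [
    "the doctor examined the patient in the clinic".split(),
    "the nurse examined the patient in the hospital".split(),
    "the farmer drove the tractor across the field".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
index = {w: i for i, w in enumerate(vocab)}
window = 2                                  # cooccurrence window size

# Count cooccurrences within +/- window tokens of each word.
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[index[w], index[sent[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# "doctor" and "nurse" never cooccur directly, but their cooccurrence
# profiles are more similar than those of "doctor" and "tractor".
print(cosine(counts[index["doctor"]], counts[index["nurse"]]))
print(cosine(counts[index["doctor"]], counts[index["tractor"]]))
```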

