Abstract

Semantic knowledge has been investigated using both online and offline methods. One common online method is category recall, in which members of a semantic category like “animals” are retrieved in a given period of time. The order, timing, and number of retrievals are used as assays of semantic memory processes. One common offline method is corpus analysis, in which the structure of semantic knowledge is extracted from texts using co-occurrence or encyclopedic methods. Online measures of semantic processing, as well as offline measures of semantic structure, have yielded data resembling inverse power law distributions. The aim of the present study is to investigate whether these patterns in data might be related. A semantic network model of animal knowledge is formulated on the basis of Wikipedia pages and their overlap in word probability distributions. The network is scale-free, in that node degree is related to node frequency as an inverse power law. A random walk over this network is shown to simulate a number of results from a category recall experiment, including power law-like distributions of inter-response intervals. Results are discussed in terms of theories of semantic structure and processing.
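To make the random-walk account concrete, the sketch below simulates retrieval as an unweighted walk over a scale-free graph, counting an inter-response interval (IRI) as the number of steps between first visits to previously unvisited nodes. This is a minimal illustration, not the paper's pipeline: it assumes the networkx library and substitutes a Barabási-Albert graph for the Wikipedia-derived network.

```python
# Sketch: random walk over a scale-free graph, with IRIs counted as
# steps between first visits to new nodes. A Barabasi-Albert graph
# stands in for the Wikipedia-derived semantic network (assumption).
import random
from collections import Counter

import networkx as nx

G = nx.barabasi_albert_graph(n=5000, m=3, seed=1)  # scale-free stand-in

def walk_iris(G, steps=100_000, seed=1):
    rng = random.Random(seed)
    node = rng.choice(list(G.nodes))
    visited = {node}
    iris, since_last = [], 0
    for _ in range(steps):
        node = rng.choice(list(G.neighbors(node)))  # uniform over neighbors
        since_last += 1
        if node not in visited:  # treat a first visit as a new retrieval
            visited.add(node)
            iris.append(since_last)
            since_last = 0
    return iris

iris = walk_iris(G)
print(Counter(iris).most_common(5), max(iris))  # many short IRIs, rare long ones
```

On graphs like this, the walk yields a heavy right tail of IRIs, many short intervals punctuated by occasional very long ones, which is the qualitative signature of the power law-like IRI distributions reported for category recall.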

Highlights

  • Semantic knowledge is a core component of language processing and other advanced cognitive functions

  • The goal of our study was to test whether power law-like inter-response interval (IRI) distributions and other findings from semantic category recall experiments could be explained by search over a scale-free semantic network

  • For the scrambled control network, there was no reliable relationship between IRIs and path length, F(2, 57) = 1.17, p > 0.3. These results show that search dynamics in both the experiment and model reflected the structure of semantic space, as measured by distance between nodes in a semantic network


Introduction

Semantic knowledge is a core component of language processing and other advanced cognitive functions. One approach is to theorize semantics as a high-dimensional feature space, where individual words and concepts are points or regions in that space (Lund and Burgess, 1996). Another approach is to theorize semantics as a network, with nodes representing words and concepts and connections among nodes representing semantic relations and associations (Collins and Loftus, 1975). Any such memory structure, whether a feature space, a network, or something else, must be learned, accessed, and maintained over time. There are numerous theories of how these memories are learned and accessed (see Rogers and McClelland, 2004), but here we focus on recalling items from semantic categories, and on how recall relates to the organization of semantic memory.
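The two framings can be made concrete as data structures. The toy sketch below (all features and links invented for illustration) shows a feature-space model, where similarity falls out of vector geometry, alongside a network model, where relatedness is an explicit edge.

```python
# Toy contrast between the two representations discussed above.
# Features and links are hypothetical, chosen only for illustration.
import math

# Feature-space view: concepts as points in a feature space.
features = {
    "dog":    [1.0, 0.9, 0.1],  # [animate, furry, aquatic]
    "cat":    [1.0, 0.9, 0.0],
    "salmon": [1.0, 0.0, 1.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Network view: concepts as nodes, associations as edges.
network = {
    "dog":    {"cat", "animal"},
    "cat":    {"dog", "animal"},
    "salmon": {"animal", "fish"},
}

print(round(cosine(features["dog"], features["cat"]), 3))  # near 1.0: similar
print("cat" in network["dog"])  # True: a direct association
```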
