Abstract

A large body of literature supports theories positing a distributed, perceptually grounded semantic memory system. Prominent models have assumed distributed features are integrated into networks using either shallow or deep hierarchies. Previous behavioural tests of modality effects in shallow and deep hierarchies inspired by, but not implemented in, connectionist models support deep hierarchy architectures. We behaviourally replicate and model speeded dual feature verification in a sample of general-purpose modality-specific computational models of semantic memory trained on feature production norms for 541 concepts. The cross-modal advantage in semantic processing shown behaviourally and in simulations supports hierarchically organised distributed models of semantic memory and provides novel insight into the division of labour in these models. Analyses of the emergent model structure suggest animacy distinctions arise from the self-organisation of statistical co-occurrences among multisensory features but weakly among unisensory features. These findings suggest a privileged role of the multisensory convergence area for category representation.
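The deep hierarchy architecture contrasted here with shallow alternatives routes modality-specific features through unisensory layers before they converge on a shared multisensory layer. A minimal forward-pass sketch of that arrangement is below; all layer sizes, names, and the random untrained weights are illustrative assumptions, not the paper's implementation (only the 541-concept count comes from the abstract).

```python
import numpy as np

# Hypothetical sketch of a deep-hierarchy semantic network: each modality's
# features project into their own unisensory hidden layer ("spoke"), and the
# spokes converge on a shared multisensory layer ("hub"). Untrained random
# weights; sizes are illustrative assumptions.

rng = np.random.default_rng(0)

def layer(x, w):
    """One dense layer with a logistic activation."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

n_concepts = 541                                   # from the feature norms in the abstract
modalities = {"visual": 40, "auditory": 25, "functional": 30}  # assumed sizes
hidden, hub = 20, 50                               # assumed layer widths

# Random binary vectors stand in for the feature production norms.
features = {m: rng.integers(0, 2, size=(n_concepts, d)).astype(float)
            for m, d in modalities.items()}

# Spokes: one unisensory hidden layer per modality.
spoke_w = {m: rng.normal(0.0, 0.1, size=(d, hidden)) for m, d in modalities.items()}
spokes = {m: layer(features[m], spoke_w[m]) for m in modalities}

# Hub: concatenated spoke activations converge on one multisensory layer,
# where cross-modal feature co-occurrence can be integrated.
hub_w = rng.normal(0.0, 0.1, size=(hidden * len(modalities), hub))
hub_act = layer(np.concatenate([spokes[m] for m in modalities], axis=1), hub_w)

print(hub_act.shape)  # one hub representation per concept: (541, 50)
```

In a shallow-hierarchy variant, by contrast, all modality features would feed one integration layer directly, with no unisensory intermediate stage; the abstract's cross-modal verification advantage is the behavioural signature distinguishing the two.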
