Abstract

The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be purely semantic, or they might also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate with both semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
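To make the described approach concrete, the following is a minimal sketch of word/picture cross-condition decoding followed by an RSA-style comparison of a learned latent representation against candidate feature models. It is not the authors' actual pipeline: the data are random placeholders, the scikit-learn classifier, layer sizes, and the semantic/visual model RDMs are all assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: trials x features (e.g., MEG sensor patterns at one time
# point), split by stimulus modality, with shared concept labels across modalities.
rng = np.random.default_rng(0)
n_concepts, n_trials_per, n_sensors = 8, 20, 50
labels = np.repeat(np.arange(n_concepts), n_trials_per)
X_pictures = rng.normal(size=(labels.size, n_sensors))
X_words = rng.normal(size=(labels.size, n_sensors))

# Cross-condition decoding: train on picture trials, test on word trials.
# Above-chance transfer suggests a representation shared across modalities.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_pictures, labels)
transfer_accuracy = clf.score(X_words, labels)

# Treat the hidden-layer activations as the latent modality-independent
# representation, average them per concept, and build a dissimilarity matrix.
hidden = np.maximum(X_words @ clf.coefs_[0] + clf.intercepts_[0], 0)  # ReLU hidden layer
concept_means = np.array([hidden[labels == c].mean(axis=0) for c in range(n_concepts)])
latent_rdm = pdist(concept_means, metric="correlation")

# Compare against candidate feature models (placeholder semantic and visual
# model RDMs over the same concepts), RSA-style.
semantic_rdm = pdist(rng.normal(size=(n_concepts, 300)), metric="correlation")
visual_rdm = pdist(rng.normal(size=(n_concepts, 100)), metric="correlation")
rho_sem, _ = spearmanr(latent_rdm, semantic_rdm)
rho_vis, _ = spearmanr(latent_rdm, visual_rdm)
print(f"transfer accuracy: {transfer_accuracy:.2f}")
print(f"latent vs semantic model: {rho_sem:.2f}")
print(f"latent vs visual model:   {rho_vis:.2f}")
```

With real data, positive correlations of the latent RDM with both the semantic and the visual model RDMs would correspond to the pattern of results reported in the abstract.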
