Abstract

Orthographic visual perception (reading) is encoded via widespread dynamic interactions between the brain's language centers and the visual cortex. In this study, we investigated the decoding of orthographic visual perception with magnetoencephalography (MEG), where phrases were visually presented to participants. We compared the decoding performance obtained with sensors over the occipital lobe with that obtained with sensors covering the whole head. Two naive machine learning classifiers, namely support vector machines (SVM) and linear discriminant analysis (LDA), were used. Experimental results indicated that the decoding performance using only occipital sensors was similar to the performance obtained with all sensors within the task period, and both were above chance level. In addition, a temporal analysis using short time windows showed that the occipital sensors were more discriminative near stimulus onset than at later time periods, whereas the whole-head sensor setup performed slightly better than the occipital sensors at later time periods. This finding may indicate a sequential order of processing (from the visual cortex to areas beyond the occipital lobe) during orthographic visual perception.

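As a rough illustration of the kind of analysis described above (not the authors' actual pipeline), the sketch below shows how occipital-only and whole-head decoding with SVM and LDA classifiers could be compared over short sliding time windows using scikit-learn. The variables `X` (epoched MEG data, trials x sensors x time samples), `y` (phrase labels), and `occipital_idx` (indices of occipital sensors) are hypothetical placeholders, and the window and cross-validation settings are illustrative assumptions only.

```python
# Minimal sketch, assuming epoched MEG data X (n_trials, n_sensors, n_times),
# integer phrase labels y, and a hypothetical list of occipital sensor indices.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def windowed_decoding(X, y, sensor_idx, win_len=20, step=10):
    """Cross-validated accuracy per short time window for a given sensor subset."""
    scores = {"SVM": [], "LDA": []}
    n_times = X.shape[2]
    for start in range(0, n_times - win_len + 1, step):
        # Flatten the selected sensors x window samples into one feature vector per trial.
        Xw = X[:, sensor_idx, start:start + win_len].reshape(len(X), -1)
        for name, clf in [("SVM", SVC(kernel="linear")),
                          ("LDA", LinearDiscriminantAnalysis())]:
            pipe = make_pipeline(StandardScaler(), clf)
            acc = cross_val_score(pipe, Xw, y, cv=5).mean()
            scores[name].append(acc)
    return scores

# Hypothetical usage: compare occipital-only with whole-head decoding over time.
# occ_scores = windowed_decoding(X, y, occipital_idx)
# all_scores = windowed_decoding(X, y, np.arange(X.shape[1]))
```

Comparing the two resulting score curves over window start times would show whether the occipital subset is most informative near onset while the whole-head setup gains an advantage later, as reported in the abstract.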