Abstract

OBJECTIVES/GOALS: Speech production requires mapping between sound-based and motor-based neural representations of a word, a mapping accomplished by learned internal models. However, the neural bases of these internal models remain unclear. The aim of this study is to provide experimental evidence for these internal models in the brain during speech production.

METHODS/STUDY POPULATION: Sixteen healthy human adults were recruited for this electroencephalography (EEG) speech study. Twenty English pseudowords were designed to vary in confusability along specific articulatory features (place vs. manner); all words were controlled for length and voicing. Three task conditions were performed: speech perception, covert speech production, and overt speech production. EEG was recorded using a 64-channel Biosemi ActiveTwo system. EMG was recorded over the inferior orbicularis oris and the neck strap muscles. Overt productions were recorded with a high-quality microphone to determine overt production onset; EMG was used to determine covert production onset. Representational similarity analysis (RSA) was used to probe the sound- and motor-based neural representations over sensors and time for each task.

RESULTS/ANTICIPATED RESULTS: Production (motor) and perception (sound) neural representations were computed using a cross-validated squared Euclidean distance metric. In the speech perception task, the RSA results show strong selectivity around 150 ms, compatible with recent electrocorticography findings in human superior temporal gyrus. Parietal sensors showed a large difference for motor-based neural representations, indicating strong encoding of production-related processes, as hypothesized by previous studies of the ventral and dorsal stream model of language. Temporal sensors, however, showed a large change for both motor- and sound-based neural representations. This is a surprising result, since temporal regions are believed to be primarily engaged in perception (sound-based) processes.

DISCUSSION/SIGNIFICANCE: This study used neuroimaging (EEG) and advanced multivariate pattern analysis (RSA) to test models of production-based (motor) and perception-based (sound) neural representations across three speech task conditions. The results demonstrate the feasibility of this approach for mapping how perception and production processes interact in the brain.
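For readers unfamiliar with the distance metric named in the results, the following is a minimal sketch of a cross-validated squared Euclidean representational dissimilarity matrix (RDM) computed over EEG sensor patterns. It assumes trials are split into two independent partitions (e.g., odd vs. even); the partitioning scheme and all function and variable names are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def crossval_rdm(part_a, part_b):
    """Cross-validated squared Euclidean RDM between condition patterns.

    part_a, part_b : (n_conditions, n_sensors) arrays holding the mean
    EEG sensor pattern per condition, estimated from two independent
    partitions of the trials. Names are illustrative, not from the study.
    """
    n_cond = part_a.shape[0]
    rdm = np.zeros((n_cond, n_cond))
    for i in range(n_cond):
        for j in range(n_cond):
            diff_a = part_a[i] - part_a[j]   # contrast from partition A
            diff_b = part_b[i] - part_b[j]   # same contrast from partition B
            # Inner product of the same contrast estimated from independent
            # data: noise is uncorrelated across partitions, so the distance
            # estimate is unbiased (and can dip slightly below zero for
            # genuinely identical conditions).
            rdm[i, j] = diff_a @ diff_b / diff_a.size
    return rdm
```

Computing such an RDM at each time point and comparing it (e.g., via rank correlation) against model RDMs built from place and manner features would be one way to realize the over-sensors-and-time analysis the abstract describes.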
