Brain decoders that reconstruct language from semantic representations have the potential to improve communication for people with impaired language production. However, training a semantic decoder for a participant currently requires many hours of brain responses to linguistic stimuli, and people with impaired language production often also have impaired language comprehension. In this study, we tested whether language can be decoded from a goal participant without using any linguistic training data from that participant. We trained semantic decoders on brain responses from separate reference participants and then used functional alignment to transfer the decoders to the goal participant. Cross-participant decoder predictions were semantically related to the stimulus words, even when functional alignment was performed using movies with no linguistic content. To assess the extent to which semantic representations are shared between language and vision, we compared functional alignment accuracy using story and movie stimuli and found that performance was comparable in most cortical regions. Finally, we tested whether cross-participant decoders could be robust to lesions by excluding brain regions from the goal participant prior to functional alignment, and found that the decoders do not depend on data from any single brain region. These results demonstrate that cross-participant decoding can reduce the amount of linguistic training data required from a goal participant and potentially enable language decoding from participants who struggle with both language production and language comprehension.