To efficiently yet reliably represent and process information, our brains need to produce information-rich signals that differentiate between moments or cognitive states, while also being robust to noise or corruption. In many, though not all, natural systems, these two properties are inversely related: more information-rich signals are less robust, and vice versa. Here, we examined how these properties change with ongoing cognitive demands. To this end, we applied dimensionality reduction algorithms and pattern classifiers to functional neuroimaging data collected as participants listened to a story, listened to temporally scrambled versions of the story, or underwent a resting-state scan. We considered two primary aspects of the neural data recorded in these experimental conditions. First, we treated the maximum decoding accuracy achievable across participants as an indicator of the "informativeness" of the recorded patterns. Second, we treated the number of features (components) required to achieve a threshold decoding accuracy as a proxy for the "compressibility" of the neural patterns (where fewer components indicate greater compression). Overall, we found that peak decoding accuracy (achievable without restricting the number of features) was highest in the intact (unscrambled) story-listening condition. However, the number of features required to achieve comparable classification accuracy was also lowest in the intact story-listening condition. Taken together, our work suggests that brain networks flexibly reconfigure according to ongoing task demands, and that the activity patterns associated with higher-order cognition and high engagement are both more informative and more compressible than those associated with lower-order tasks and lower engagement.
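To make the two measures concrete, the sketch below illustrates one way the informativeness and compressibility proxies could be computed for a single dataset. It is a minimal illustration assuming a scikit-learn-style pipeline on synthetic data; the component counts, classifier, and 90%-of-peak threshold are assumptions for demonstration, not the paper's actual methods or parameters.

```python
# Hypothetical sketch (not the authors' pipeline): how decoding accuracy varies
# with the number of retained components, and the two summary measures derived
# from that curve. Synthetic data stand in for fMRI activity patterns; labels
# stand in for story segments or cognitive states.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "neural" data: 200 timepoints x 500 features (e.g., voxels or
# parcels), with 10 segment labels and a weak label-dependent signal in noise.
n_samples, n_features, n_labels = 200, 500, 10
labels = np.repeat(np.arange(n_labels), n_samples // n_labels)
signal = rng.normal(size=(n_labels, n_features))[labels]
X = 0.5 * signal + rng.normal(size=(n_samples, n_features))

def accuracy_for(n_components):
    """Cross-validated decoding accuracy after reducing to n_components."""
    clf = make_pipeline(PCA(n_components=n_components),
                        LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X, labels, cv=5).mean()

component_counts = [2, 5, 10, 20, 50, 100]
accuracies = {k: accuracy_for(k) for k in component_counts}

# "Informativeness": the peak accuracy achievable across component counts.
peak_accuracy = max(accuracies.values())

# "Compressibility": the fewest components reaching a threshold accuracy
# (here, 90% of the peak -- an illustrative choice, not the paper's criterion).
threshold = 0.9 * peak_accuracy
min_components = min(k for k, acc in accuracies.items() if acc >= threshold)

print(f"peak accuracy: {peak_accuracy:.3f}")
print(f"components needed to reach {threshold:.3f}: {min_components}")
```

Under this framing, a condition yielding a higher peak accuracy would count as more informative, and one reaching the threshold with fewer components would count as more compressible.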