Abstract

Cognitive models in psychology and neuroscience widely assume that the human brain maintains an abstract representation of tasks. This assumption is fundamental to theories explaining how we learn quickly, think creatively, and act flexibly. However, neural evidence for a verifiably generative abstract task representation has been lacking. Here, we report an experimental paradigm that requires forming such a representation to act adaptively in novel conditions without feedback. Using functional magnetic resonance imaging, we observed that abstract task structure was represented within left mid-lateral prefrontal cortex, bilateral precuneus, and inferior parietal cortex. These results provide support for the neural instantiation of the long-supposed abstract task representation in a setting where we can verify its influence. Such a representation can afford massive expansions of behavioral flexibility without additional experience, a vital characteristic of human cognition.

Highlights

  • Many complex tasks we perform daily, though different in their details, share an abstract structure

  • In session 1, participants completed a behavioral version of the task, and performance in the generalization phase determined their inclusion in the fMRI experiment in sessions 2 and 3. 48% of participants passed a criterion of ≥70% accuracy in all 18 generalization conditions and were therefore recruited for two fMRI sessions (see the sketch after this list).

  • Participants who failed to meet this criterion performed the same task in two additional behavioral sessions rather than in the scanner.
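As an illustration of the inclusion rule above, the sketch below checks whether one participant's per-condition accuracies clear the threshold. The data layout and variable names are our assumptions for illustration, not the authors' analysis code.

    import numpy as np

    # Hypothetical per-condition generalization accuracies for one participant
    # (18 conditions); the array layout is assumed for illustration only.
    condition_accuracy = np.array([0.72, 0.85, 0.90, 0.78, 0.81, 0.75,
                                   0.88, 0.70, 0.93, 0.77, 0.84, 0.79,
                                   0.71, 0.86, 0.92, 0.74, 0.80, 0.83])

    # Inclusion criterion stated above: at least 70% accuracy in every one
    # of the 18 generalization conditions.
    passes_criterion = bool(np.all(condition_accuracy >= 0.70))
    print(passes_criterion)  # True for this hypothetical participant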


Introduction

Many complex tasks we perform daily, though different in their details, share an abstract structure. Computational accounts often formalize this shared structure as a latent state of the task. Latent states have been defined as distributions over the spatio-temporal occurrence of rewards or punishments (Gershman et al., 2013; Nassar et al., 2019), over task stimuli or stimulus features (Collins and Frank, 2013; Gershman and Niv, 2013; Tomov et al., 2018), or as a conditionalization of action values on recent task history (Schuck et al., 2016; Zhou et al., 2019). From this distribution over task features, an agent can infer which conditions belong to the same latent state and which do not. This information can be used to segregate or lump together observations, making learning more efficient and enabling generalization of learning between settings that share the same latent states (Gershman and Niv, 2010; Niv, 2019).
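To make the idea concrete, the following minimal sketch (our illustration under simple assumptions, not the model used in this study) treats each latent state as a hypothetical distribution over discrete outcomes and computes a posterior over states from the outcomes observed in a new condition; assigning the condition to a state is what licenses reuse of prior learning without further feedback.

    import numpy as np

    # Hypothetical latent states: P(outcome | state) over three discrete
    # outcomes. Both states and their probabilities are assumptions made
    # purely for illustration.
    states = {
        "state_A": np.array([0.8, 0.1, 0.1]),
        "state_B": np.array([0.1, 0.8, 0.1]),
    }

    def posterior_over_states(outcome_counts, states, prior=None):
        """Multinomial likelihood times a uniform (or given) prior, normalized."""
        names = list(states)
        if prior is None:
            prior = np.ones(len(names)) / len(names)
        log_post = np.log(prior)
        for i, name in enumerate(names):
            # log-likelihood of the observed outcome counts under this state
            log_post[i] += np.sum(outcome_counts * np.log(states[name]))
        post = np.exp(log_post - log_post.max())
        return dict(zip(names, post / post.sum()))

    # A novel condition that mostly produced outcome 0 is assigned to state_A,
    # so learning tied to state_A can be generalized to it.
    print(posterior_over_states(np.array([7, 2, 1]), states))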
