A striking feature of human cognition is an exceptional ability to rapidly adapt to novel situations. This ability is proposed to rely on abstracting and generalizing past experiences. While previous research has explored how humans detect and generalize single sequential processes, we have a limited understanding of how humans adapt to more naturalistic scenarios, such as complex, multi-subprocess environments. Here, we propose a candidate computational mechanism that posits compositional generalization of knowledge about subprocess dynamics. In two samples (N = 238 and N = 137), we combined a novel sequence learning task with computational modeling to ask whether humans extract and generalize subprocesses compositionally to solve new problems. In prior learning, participants experienced sequences of compound images formed from the product spaces of two graphs (group 1: G1 and G2; group 2: G3 and G4). In transfer learning, both groups encountered compound images from the product of G1 and G3, composed entirely of new images. We show that subprocess knowledge transferred between task phases, such that in a new task environment each group had enhanced accuracy in predicting the subprocess dynamics they had experienced during prior learning. Computational models utilizing predictive representations, based solely on the temporal contiguity of experienced task states and without an ability to transfer knowledge, failed to explain these data. Instead, behavior was consistent with a predictive representation model that maps task states between prior and transfer learning. These results help advance a mechanistic understanding of how humans discover and abstract the subprocesses composing their experiences and compositionally reuse prior knowledge as a scaffolding for new experiences.
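To make the baseline concrete, the following is a minimal sketch (our illustration, not the paper's actual model or code) of a predictive representation learned purely from the temporal contiguity of visited task states, in the style of a successor representation; the graph structure, state count, and learning-rate values are assumptions for demonstration only.

```python
import numpy as np

def learn_sr(state_sequence, n_states, gamma=0.9, alpha=0.1):
    """Learn a successor-representation matrix M from a state sequence.

    M[s, s2] estimates the expected discounted future occupancy of state
    s2 when starting from state s, driven only by temporal contiguity.
    """
    M = np.eye(n_states)  # each state initially predicts only itself
    for s, s_next in zip(state_sequence[:-1], state_sequence[1:]):
        # TD update: move M[s] toward one-hot(s) + gamma * M[s_next]
        target = np.eye(n_states)[s] + gamma * M[s_next]
        M[s] += alpha * (target - M[s])
    return M

# Hypothetical example: a 4-state ring traversed repeatedly (0 -> 1 -> 2 -> 3 -> 0 ...)
seq = [i % 4 for i in range(400)]
M = learn_sr(seq, n_states=4)

# Temporal contiguity is captured: from state 0, the nearest-in-time
# successor (state 1) has the highest predicted occupancy.
print(M[0, 1] > M[0, 2] > M[0, 3])  # → True
```

A model of this kind predicts upcoming states within the environment it was trained on, but, as the abstract notes, it has no mechanism for mapping states between prior and transfer learning, which is why a mapping between task phases must be added to account for the observed compositional transfer.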