Representations in a recurrent network model of motor sequence learning reveal a unified view of procedural memory consolidation and structure learning

Quan Wang1*, Constantin A. Rothkopf1 and Jochen Triesch1

1 Johann Wolfgang Goethe University, Frankfurt Institute for Advanced Studies (FIAS), Germany

Abstract

Humans can improve their performance in procedural movement tasks through practice, but studies of such motor learning have produced puzzling and seemingly contradictory results. On the one hand, a wide variety of proactive and retroactive interference effects have been observed when multiple tasks have to be learned. On the other hand, some studies have reported facilitation and transfer of learning between different tasks, sometimes based on abstract structural similarities. Here we show how these different phenomena can all be understood based on generic learning principles in a recurrent neural network model.

Specifically, we consider a self-organizing recurrent neural network model whose activity and connectivity are shaped by three plasticity mechanisms: spike timing-dependent plasticity (STDP), intrinsic plasticity, and synaptic scaling [1]. The network receives stimulus-specific input and is connected to a layer of motor neurons mediating the movement sequences through a winner-take-all mechanism. We use this network to model a series of experiments on movement sequence learning, using a single set of parameters in all simulations. The network learns to carry out the correct movement sequences over trials and reproduces differences in behavior between training schedules such as blocked vs. random training. The network also shows striking similarity to human performance in tasks with similar training sequences but different training times. Previously we have shown the agreement between the output of the network and psychophysical performance across several tasks and training schedules with a single set of parameters for the recurrent network.
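The three plasticity mechanisms and the winner-take-all motor readout described above can be illustrated with a minimal binary-threshold network in the spirit of the SORN model of [1]. This is a hedged sketch, not the authors' implementation: the network sizes, learning rates, target firing rate, and connection probability below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_E, N_IN, N_OUT = 200, 10, 4    # excitatory, input, and motor unit counts (illustrative)
eta_stdp, eta_ip = 0.001, 0.001  # STDP and intrinsic-plasticity learning rates (illustrative)
h_ip = 0.1                       # target firing rate for intrinsic plasticity (assumption)

W = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.05)  # sparse recurrent weights
np.fill_diagonal(W, 0.0)
W_in = rng.random((N_E, N_IN)) * 0.5    # stimulus-specific input weights
W_out = rng.random((N_OUT, N_E)) * 0.1  # motor readout weights
T = rng.random(N_E) * 0.5               # adaptive firing thresholds

x = (rng.random(N_E) < h_ip).astype(float)  # initial binary network state

def step(u):
    """One update: recurrent + input drive, then the three plasticity rules."""
    global x, W, T
    x_new = ((W @ x + W_in @ u - T) > 0).astype(float)
    # 1) STDP: potentiate pre-before-post pairs, depress post-before-pre pairs
    dW = eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W + dW * (W > 0), 0.0, None)  # only existing synapses change
    # 2) Synaptic scaling: normalize each unit's incoming excitatory weights
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    W /= row_sums
    # 3) Intrinsic plasticity: nudge each threshold toward the target rate h_ip
    T += eta_ip * (x_new - h_ip)
    x = x_new
    # Winner-take-all readout: the most strongly driven motor unit fires
    return int(np.argmax(W_out @ x))

u = np.zeros(N_IN)
u[0] = 1.0  # a stimulus-specific input pattern
actions = [step(u) for _ in range(50)]  # sequence of winning motor units
```

Under these rules STDP carves out stimulus-driven activity trajectories, while synaptic scaling and intrinsic plasticity keep total synaptic drive and firing rates bounded, which is what allows one parameter set to be reused across tasks.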
The current work presents a detailed analysis of the underlying changes in the neuronal representations of the motor sequences across learning. Mutual information, principal component analysis (PCA) of network activity, and measures of neuronal selectivity for parts of the motor sequences reveal how input representations and the trajectories of neural activity change with training. Finally, we provide testable experimental predictions. Thus, we show how training schedule and task similarity interact to produce a rich set of interference and facilitation effects, thereby unifying procedural memory consolidation and structure learning in a recurrent network model with multiple plasticity mechanisms.

Acknowledgements

First authorship is shared between QW and CR. This research was funded in part by the BMBF through the Bernstein Focus: Neurotechnology in Frankfurt and by the European Union through FP7 project IM-CLeVeR.
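The PCA of network activity mentioned in the abstract can be sketched as follows. This is a hedged illustration, not the authors' analysis pipeline: the activity array is synthetic stand-in data, and the trial, timestep, and neuron counts are assumptions chosen only to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for recorded network activity: trials x timesteps x neurons.
# In the model, each row would be the excitatory population state at one timestep.
activity = rng.random((40, 20, 200)) < 0.1       # binary raster at ~10% firing rate
X = activity.reshape(-1, 200).astype(float)      # (trials*timesteps, neurons)

# PCA via SVD of the mean-centered activity matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)                  # variance fraction per component
trajectory = Xc @ Vt[:3].T                       # activity projected onto top 3 PCs
trajectory = trajectory.reshape(40, 20, 3)       # one low-dimensional path per trial
```

Comparing such low-dimensional trajectories before and after training is one way to visualize how the neural representation of a motor sequence reorganizes with practice.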
