Abstract

We propose a mechanistic explanation of how working memories are built and reconstructed from the latent representations of visual knowledge. The proposed model features a variational autoencoder, with an architecture that corresponds broadly to the human visual system, and an activation-based binding pool of neurons that links latent-space activities to tokenized representations. The simulation results revealed that new pictures of familiar types of items can be encoded and retrieved efficiently from higher levels of the visual hierarchy, whereas truly novel patterns are better stored using only early layers. Moreover, a given stimulus in working memory can have multiple codes, allowing visual detail to be represented alongside categorical information. Finally, we validated the model's assumptions by testing a series of predictions against behavioural results obtained from working memory tasks. The model demonstrates how visual knowledge yields compact visual representations that support efficient memory encoding.
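
The sketch below is a minimal illustration, not the authors' implementation, of the architecture the abstract describes: a small variational autoencoder standing in for the visual hierarchy, and an activation-based binding pool that stores latent activity against memory tokens through fixed random weights and later reads it back for reconstruction. All names (TinyVAE, BindingPool, store, retrieve), layer sizes, and the PyTorch framing are hypothetical choices made for this example.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """A small VAE standing in for the visual hierarchy."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, input_dim), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec(z)

class BindingPool:
    """Binds latent activity to memory tokens via fixed random weights."""
    def __init__(self, latent_dim=32, pool_size=1000, n_tokens=4):
        # One fixed random projection per token (token-specific binding weights).
        self.W = torch.randn(n_tokens, pool_size, latent_dim) / pool_size ** 0.5
        self.pool = torch.zeros(pool_size)

    def store(self, token, z):
        # Superimpose this item's latent activity onto the shared pool.
        self.pool += self.W[token] @ z

    def retrieve(self, token):
        # Read the pool back out through the same token's weights.
        return self.W[token].t() @ self.pool

# Usage: encode an image, bind its latent code to a token, then reconstruct
# the item from the binding pool rather than from the stimulus itself.
vae = TinyVAE()
bp = BindingPool()
x = torch.rand(784)                    # stand-in for a flattened image
mu, _ = vae.encode(x)
bp.store(token=0, z=mu.detach())
z_hat = bp.retrieve(token=0)
x_hat = vae.decode(z_hat)              # memory-based reconstruction
```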
