Abstract
We propose a mechanistic explanation of how working memories are built and reconstructed from the latent representations of visual knowledge. The proposed model pairs a variational autoencoder, whose architecture corresponds broadly to the human visual system, with an activation-based binding pool of neurons that links latent-space activity to tokenized representations. Simulation results revealed that new pictures of familiar types of items can be encoded and retrieved efficiently from higher levels of the visual hierarchy, whereas truly novel patterns are better stored using only early layers. Moreover, a given stimulus in working memory can have multiple codes, allowing representation of visual detail in addition to categorical information. Finally, we validated our model's assumptions by testing a series of predictions against behavioural results from working memory tasks. The model demonstrates how visual knowledge yields compact visual representations for efficient memory encoding.
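To make the binding-pool idea concrete, the following is a minimal sketch of activation-based encoding and retrieval of a latent vector through token-gated pool neurons. It is an illustration under our own assumptions rather than the paper's implementation: the sizes (LATENT_DIM, POOL_SIZE, N_TOKENS), the sparse random gating scheme, and the least-squares readout are all hypothetical choices, in the spirit of a generic binding-pool scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# All sizes below are illustrative assumptions, not the paper's values.
LATENT_DIM = 64    # width of one VAE latent layer
POOL_SIZE = 2048   # number of binding-pool neurons
N_TOKENS = 4       # number of working-memory tokens

# Fixed random connectivity: each token gates a sparse subset of the pool,
# and each pool neuron receives random weights from the latent layer.
token_gate = rng.random((N_TOKENS, POOL_SIZE)) < 0.1
latent_to_pool = rng.standard_normal((POOL_SIZE, LATENT_DIM))

pool = np.zeros(POOL_SIZE)  # activation-based memory trace


def encode(latent: np.ndarray, token: int) -> None:
    """Add a latent vector to the pool neurons gated by this token."""
    global pool
    pool = pool + token_gate[token] * (latent_to_pool @ latent)


def retrieve(token: int) -> np.ndarray:
    """Reconstruct the latent vector from the token-gated pool activity."""
    gate = token_gate[token]
    # One simple readout choice: least squares through the gated weights.
    gated_weights = latent_to_pool * gate[:, None]
    latent_hat, *_ = np.linalg.lstsq(gated_weights, pool * gate, rcond=None)
    return latent_hat


# Store one latent vector under token 0, then reconstruct it.
z = rng.standard_normal(LATENT_DIM)
encode(z, token=0)
z_hat = retrieve(token=0)
print("reconstruction correlation:", round(np.corrcoef(z, z_hat)[0, 1], 3))
```

Under this sketch, latents taken from different layers of the visual hierarchy could be stored under separate tokens, which is one way to picture the abstract's claim that a single stimulus in working memory can carry multiple codes.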