Abstract

Recent developments in generative models have demonstrated that with the right dataset, techniques, computational infrastructure, and network architectures, it is possible to generate seemingly intelligent outputs without explicitly reckoning with underlying cognitive processes. The ability to generate novel, plausible behaviour could be a boon to cognitive modellers. However, insights for cognition are limited, given that the black-box nature of generative models does not provide readily interpretable hypotheses about underlying cognitive mechanisms. Cognitive architectures, on the other hand, make very strong hypotheses about the nature of cognition, explicitly describing the subjects and processes of reasoning. Unfortunately, the formal framings of cognitive architectures can make it difficult to generate novel or creative outputs. We propose to show that cognitive architectures relying on certain Vector Symbolic Algebras (VSAs) are, in fact, naturally understood as generative models. We discuss how memories of VSA representations of data form distributions, which is precisely what is needed to construct the distributions underlying generative models. Finally, we discuss the strengths, challenges, and future directions for this line of work.
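To make the central claim concrete, the following is a minimal sketch of how a VSA memory can be read as a distribution. It assumes one particular VSA family, Holographic Reduced Representations (circular-convolution binding), which the abstract does not specify; the symbol names, dimensionality, and softmax temperature are illustrative choices, not details from the paper. A bundle (sum) of bound role-filler pairs is queried by unbinding, and the similarities to a cleanup memory of known symbols are normalized into a probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # vector dimensionality (illustrative choice)

def vec():
    # Random unit vector serving as an atomic VSA symbol.
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution: the HRR binding operation.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)

def unbind(s, a):
    # Approximate unbinding: bind with a's involution
    # (circular correlation), recovering a noisy filler.
    inv = np.concatenate(([a[0]], a[1:][::-1]))
    return bind(s, inv)

# Cleanup memory of known symbols (hypothetical vocabulary).
symbols = {name: vec() for name in ("dog", "cat", "fish")}
role = vec()

# A memory formed by bundling two role-filler bindings.
memory = bind(role, symbols["dog"]) + bind(role, symbols["cat"])

# Query the memory: unbind the role, compare against the cleanup
# memory, and normalize similarities into a distribution.
query = unbind(memory, role)
sims = np.array([query @ symbols[n] for n in symbols])
probs = np.exp(sims / 0.1)
probs /= probs.sum()  # distribution over {dog, cat, fish}
```

Under this reading, sampling a symbol from `probs` is a generative act: the bundled memory assigns high probability to the fillers it has stored ("dog" and "cat") and negligible probability to unstored symbols ("fish"), illustrating how a VSA memory induces the kind of distribution a generative model requires.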
