Abstract

Large Language Models (LLMs) have shown impressive performance on a wide variety of tasks. However, they exhibit limitations that hinder their performance, especially on tasks requiring multiple steps of reasoning or compositionality. Arguably, the primary sources of these limitations are the decoding strategy and the way the models are trained. To overcome these limitations, we propose, and provide a general description of, an architecture that combines LLMs and cognitive architectures, called the Language Model based Cognitive Architecture (LMCA). We draw an analogy between this architecture and "fast" and "slow" thinking in human cognition.
