Abstract
Large Language Models (LLMs) have shown impressive performance on a wide variety of tasks. However, they exhibit limitations that hinder their performance, especially on tasks requiring multiple steps of reasoning or compositionality. Arguably, the primary sources of these limitations are the decoding strategy and the way the models are trained. We propose, and provide a general description of, an architecture that combines LLMs and cognitive architectures, called Language Model based Cognitive Architecture (LMCA), to overcome these limitations. We draw an analogy between this architecture and "fast" and "slow" thinking in human cognition.