Abstract

Recent advances in deep learning have allowed artificial intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic, and cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. Global Workspace Theory (GWT) posits a large-scale system that integrates and distributes information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep-learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a unique, amodal Global Latent Workspace (GLW). We review the potential functional advantages of a GLW, along with its neuroscientific implications.
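To make the proposal concrete, the sketch below illustrates one plausible reading of "unsupervised neural translation between multiple latent spaces": small translator networks map each pretrained, domain-specific latent space into a shared workspace and back, trained with cycle-consistency objectives that require no paired data. This is an illustrative sketch only, not the authors' implementation; the module names, dimensions, and loss weighting are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' code): two frozen,
# pretrained networks each produce latent vectors; lightweight translators
# link them through a shared Global Latent Workspace via cycle consistency.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_A, LATENT_B, WORKSPACE = 64, 128, 32  # hypothetical dimensions

class Translator(nn.Module):
    """Maps a domain-specific latent vector to/from the shared workspace."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_out)
        )

    def forward(self, z):
        return self.net(z)

# Encoders into the workspace, decoders back to each latent space
enc_a, dec_a = Translator(LATENT_A, WORKSPACE), Translator(WORKSPACE, LATENT_A)
enc_b, dec_b = Translator(LATENT_B, WORKSPACE), Translator(WORKSPACE, LATENT_B)

def glw_losses(z_a, z_b):
    """Unsupervised objectives linking two unpaired latent spaces.

    z_a, z_b: batches of latents from two pretrained networks (e.g. a
    vision model and a language model); no paired examples are needed.
    """
    w_a, w_b = enc_a(z_a), enc_b(z_b)
    # Within-domain cycle: A -> workspace -> A should reconstruct z_a
    cycle_a = F.mse_loss(dec_a(w_a), z_a)
    cycle_b = F.mse_loss(dec_b(w_b), z_b)
    # Cross-domain cycle: route A through B's latent space and back,
    # forcing the workspace to carry information both domains can use
    round_trip = F.mse_loss(dec_a(enc_b(dec_b(w_a))), z_a)
    return cycle_a + cycle_b + round_trip

# Usage with random stand-ins for pretrained latent vectors
loss = glw_losses(torch.randn(8, LATENT_A), torch.randn(8, LATENT_B))
loss.backward()
```

In this reading, the shared workspace is "amodal" because the same low-dimensional space must support reconstruction in every connected domain, so it cannot privilege the features of any single modality.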
