Abstract

Despite the recent progress of deep-learning-powered AI on narrow tasks, we are not close to human intelligence in its flexibility, versatility, and efficiency. Efficient learning and effective generalization come from inductive biases, and building Artificial General Intelligence (AGI) is an exercise in finding the right set of inductive biases that make fast learning possible while being general enough to be widely applicable to tasks that humans excel at. To make progress toward AGI, we argue that we can look to the human brain for such inductive biases and principles of generalization. To that end, we propose a strategy for gaining insights from the brain by simultaneously considering the world it acts upon and the computational framework that supports efficient learning and generalization. We present a neuroscience-inspired generative model of vision as a case study of this approach and discuss some open problems on the path to AGI.

Highlights

  • Despite revolutionary progress in artificial intelligence in the last decade, human intelligence remains unsurpassed in its versatility, efficiency, and flexibility

  • The neurosciences and cognitive sciences produce a vast array of data every year. It is natural for a machine learning researcher to be intimidated by this complexity and conclude that nothing of value to artificial intelligence can be learned from the brain

  • Message passing inspired by cortical dynamics: the Recursive Cortical Network (RCN) was instantiated as a probabilistic graphical model (PGM) (Pearl, 1988); a toy sketch of such message passing follows below
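
To make the message-passing highlight concrete, here is a minimal sketch of sum-product message passing on a toy two-variable PGM. This is a generic belief-propagation illustration, not the RCN implementation; the variable names, potentials, and numerical values are hypothetical and chosen only for brevity.

```python
# Minimal sketch: sum-product message passing on a toy pairwise PGM.
# Generic belief-propagation illustration only -- NOT the RCN model;
# all potentials and values below are hypothetical.
import numpy as np

# Unary potentials (local evidence) for two binary variables X1 and X2.
phi1 = np.array([0.7, 0.3])
phi2 = np.array([0.4, 0.6])

# Pairwise potential psi(x1, x2) that favours agreement between X1 and X2.
psi = np.array([[0.9, 0.1],
                [0.1, 0.9]])

# Message from X2 to X1: sum X2 out of its unary and the pairwise potential.
m_2_to_1 = psi @ phi2        # m(x1) = sum_x2 psi(x1, x2) * phi2(x2)
# Message from X1 to X2, computed symmetrically.
m_1_to_2 = psi.T @ phi1

# Beliefs combine local evidence with incoming messages; normalize to marginals.
b1 = phi1 * m_2_to_1
b2 = phi2 * m_1_to_2
print("P(X1):", b1 / b1.sum())
print("P(X2):", b2 / b2.sum())
```

On a tree-structured PGM this two-pass exchange yields exact marginals; the RCN described in the paper runs such messages over a hierarchy with lateral connections (see the section headings below).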


Summary

INTRODUCTION

Despite revolutionary progress in artificial intelligence in the last decade, human intelligence remains unsurpassed in its versatility, efficiency, and flexibility. Even extremely simple tasks require orders of magnitude more data to train than humans need, and the performance of the trained systems remains brittle (Lake et al., 2016; Kansky et al., 2017; Marcus, 2018; Smith, 2019). For these reasons, today's AI systems are considered narrow, while human intelligence is considered general. Even for mammalian brains, there is a bewildering array of experimental findings in neuroscience, spanning several levels of investigation, from single-neuron physiology to microcircuits of several hundred cells to psychophysical correlates of intelligence involving several brain areas. It is not clear which of these insights are relevant for machine learning and artificial intelligence, because some of the observations might relate to the implementation substrate or to arbitrary constraints on the amount of hardware. We discuss a few open questions regarding general intelligence before offering closing thoughts.

Direct Fit on Isolated Tasks Does Not Produce General Intelligence
Common Sense Is the Holy Grail
THE TRIANGULATION STRATEGY FOR LEARNING LESSONS FROM THE BRAIN
What Kind of Visual Generative Model Is Suitable for Common Sense?
Shape Bias and Factorized Representation of Contours and Surfaces
Lateral Connections for Contour Continuity
Hierarchy
Bringing It All Together
DISCUSSION
Don’t We Need a Precise Mathematical Definition of AGI to Build One?
CONCLUSION
