Abstract

Research in Artificial Intelligence (AI) has focused mostly on two extremes: small improvements in narrow AI domains on one side, and universal theoretical frameworks, which are often uncomputable or lack practical implementations, on the other. In this paper we attempt to keep a big-picture view while also providing a particular theory and its implementation, presenting a novel, purposely simple, and interpretable hierarchical architecture. This architecture incorporates the unsupervised learning of a model of the environment, learning the influence of one's own actions, model-based reinforcement learning, hierarchical planning, and symbolic/sub-symbolic integration in general. The learned model is stored in the form of hierarchical representations that are increasingly more abstract, but can retain details when needed. We demonstrate the universality of the architecture by testing it on a series of diverse environments, ranging from audio/visual compression to discrete and continuous action spaces, to learning disentangled representations.

Highlights

  • The amount of context information added into the output should be weighted by the certainty about that context. We address this by converting the continuous observations into a discrete hidden state, which is then converted back into a continuous representation on the output, where the continuity captures the information obtained from the context inputs.

  • All the clusters start to be used. This change slightly improves the reconstruction error and allows the Temporal Pooler to start learning. This is relevant to [78], where it is argued that the internal structure of the network changes even if it might not be apparent from the output.

  • The single Expert was able to learn to drive on a road in a so-called puppet-learning setting, where the correct actions are shown.
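The first highlight describes mapping continuous observations through a discrete hidden state and back to a continuous output in which context information is weighted by the certainty of that context. A minimal sketch of one way such a bottleneck could work is given below; the function name `encode_decode`, the nearest-cluster discretization, and the use of the peak of a softmax over `context_logits` as a certainty proxy are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def encode_decode(observation, clusters, context_logits=None):
    """Illustrative sketch (not the paper's method): discretize a continuous
    observation to the nearest cluster, then decode to a continuous output
    where context information is weighted by the certainty of that context."""
    # Discrete hidden state: index of the nearest cluster centre.
    distances = np.linalg.norm(clusters - observation, axis=1)
    hidden = int(np.argmin(distances))

    if context_logits is None:
        # No context available: output is simply the chosen cluster centre.
        return hidden, clusters[hidden]

    # Softmax over context logits; a peaked distribution means high certainty.
    p = np.exp(context_logits - np.max(context_logits))
    p /= p.sum()
    certainty = p.max()  # crude certainty proxy in [1/K, 1]

    # Continuous output: blend the discrete reconstruction with the
    # context-predicted mixture of cluster centres, weighted by certainty.
    context_prediction = p @ clusters
    output = (1.0 - certainty) * clusters[hidden] + certainty * context_prediction
    return hidden, output
```

With no context the output collapses to the discrete reconstruction; as context certainty grows, the output moves continuously toward the context-predicted point, which is the weighting behaviour the highlight describes.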


Summary

Motivation

Despite the fact that strong AI capable of handling a diverse set of human-level tasks was envisioned decades ago, and there has been significant progress in developing AI for narrow tasks, we are still far away from having a single system able to learn with efficiency and generality comparable to human beings or animals. Another class of algorithms worth mentioning encompasses systems that usually lie somewhere on the edge between cognitive architectures and adaptive general problem-solving systems. Examples of such systems are: the Non-Axiomatic Reasoning System [4], Growing Recursive Self-Improvers [5], recursive data compression architecture [6], OpenCog [7], Never-Ending Language Learning [8], Ikon Flux [9], MicroPsi [10], Lida [11] and many others [12]. This section describes the basic requirements of an autonomous agent situated in a realistic environment, and discusses how they are addressed by current Deep Learning frameworks.

Learning
Situated cognition
Reasoning
Biological inspiration
Design requirements on the architecture
Experiments
Discussion
Discussion and conclusions
Findings
Limitations and future work
