Abstract

The goal of this paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technology in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is their ability to self-learn, self-learning is identified as one important dimension of the AI state space. A second dimension is generalization, the ability to move from specific to more general types of problems. A third dimension is semantic grounding. Our overall analysis connects to a number of well-known foundational issues in the philosophy of mind and cognition: the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and use theories of meaning. It is finally argued that the dimension of grounding decomposes into three sub-dimensions, and that self-learning turns out to be only one of a whole range of “self-x-capacities” (based on ideas from organic computing) that span the self-x-subspace of the full AI state space.

Highlights

  • There is much to suggest that 15 March 2016 should be regarded as a historical date

  • Our analysis has led to a 10-dimensional AI state space that can be compactified into a three-dimensional model consisting of self-x-capacity, grounding, and generalization

Summary

Introduction

There is much to suggest that 15 March 2016 should be regarded as a historical date. On this day Lee Sedol, one of the strongest Go players in the world, lost the last game of a multi-day tournament against the “AlphaGo” AI system of the development company Google DeepMind. The deep learning revolution has led to a new AI hype over the last 10 years, be it in science, industry, the economy, or the media. These developments provide a strong motivation to rethink the question of what constrains the evolution of AI, understood as the general quest to develop thinking machines or artificial minds. It shall be argued that grounding decomposes into three sub-dimensions, but that self-learning is only one of a whole range of what are here called self-x-capacities. These span the self-x-subspace of the AI state space, which, according to our analysis, turns out to be six-dimensional. A 10-dimensional state space of AI with the main dimensions of generalization, grounding, and self-x-capacity will be defended.

Self‐Learning as a Dimension of the Space of AI
The Notion of Self‐Learning
Demis Hassabis
The Dimension of Self‐Learning
AGI and the Generalization Dimension
Turing Test and Generalization
Semantic Grounding as an AI Space Dimension
The Symbol Grounding Problem and the Dimension of Functional Role Grounding
The Chinese Room and the Dimension of Causal Grounding
Meaning as Use and the Dimension of Social Grounding
A Simplified Model Space
The Self‐x‐Capacity Subspace
The Grounding Subspace
Missing Dimensions?
Conclusion
