Abstract

Artificial intelligence has made tremendous advances since its inception about seventy years ago. Self-driving cars, programs beating experts at complex games, and smart robots capable of assisting people who need care are just a few of the successful examples of machine intelligence. This kind of progress might entice us to envision a near future in which society is populated by autonomous robots capable of performing the same tasks humans do. This prospect seems limited only by the power and complexity of current computational devices, both of which are improving fast. However, there are several significant obstacles on this path. General intelligence involves situational reasoning, taking perspectives, choosing goals, and an ability to deal with ambiguous information. We observe that all of these characteristics are connected to the ability to identify and exploit new affordances: opportunities or impediments on an agent's path to achieving its goals. A general example of an affordance is the use of an object in the hands of an agent. We show that it is impossible to predefine a list of such uses. Therefore, they cannot be treated algorithmically. This means that "AI agents" and organisms differ in their ability to leverage new affordances. Only organisms can do this. This implies that true AGI is not achievable in the current algorithmic frame of AI research. It also has important consequences for the theory of evolution. We argue that organismic agency is strictly required for truly open-ended evolution through radical emergence. We discuss the diverse ramifications of this argument, not only for AI research and evolution, but also for the philosophy of science.
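To make concrete what the abstract means by a "predefined list of uses", here is a minimal illustrative sketch in Python. It is not from the paper; the AlgorithmicAgent class, the KNOWN_USES table, and the choose_action method are purely hypothetical. The point it illustrates is that an agent whose repertoire of object uses is fixed at design time cannot represent, let alone exploit, a use that its designers did not enumerate.

```python
# Illustrative sketch only (not from the paper): all names here are hypothetical.
from dataclasses import dataclass, field

# A purely algorithmic agent can only select among uses that were
# enumerated at design time.
KNOWN_USES = {
    "screwdriver": ["drive_screw", "pry_lid"],
    "chair": ["sit", "stand_on"],
}


@dataclass
class AlgorithmicAgent:
    known_uses: dict = field(default_factory=lambda: dict(KNOWN_USES))

    def choose_action(self, obj, goal):
        """Return a predefined use of `obj` that matches `goal`, or None.

        The agent cannot return a use that is absent from its table:
        a genuinely new affordance (e.g. using the screwdriver to wedge
        a door open) is simply not representable here.
        """
        for use in self.known_uses.get(obj, []):
            if use == goal:
                return use
        return None


agent = AlgorithmicAgent()
print(agent.choose_action("screwdriver", "drive_screw"))  # "drive_screw"
print(agent.choose_action("screwdriver", "wedge_door"))   # None: not in the table
```

The sketch is deliberately trivial: whatever sophistication is added to choose_action, its outputs remain drawn from a set fixed in advance, which is the sense in which the paper argues new affordances cannot be treated algorithmically.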

Highlights

  • Since the founding Dartmouth Summer Research Project in 1956 (McCarthy et al., 1955), the field of artificial intelligence (AI) has attained many impressive achievements

  • We show that the term “agency” refers to radically different notions in organismic biology and AI research

  • We have argued two main points: (1) Artificial General Intelligence (AGI) is impossible in the current algorithmic frame of research in AI and robotics, since algorithms cannot identify and exploit new affordances; (2) organismic agency is strictly required for truly open-ended evolution through radical emergence


Summary

INTRODUCTION

Since the founding Dartmouth Summer Research Project in 1956 (McCarthy et al., 1955), the field of artificial intelligence (AI) has attained many impressive achievements. If one considers the human brain as a computer (by which we mean some sort of computational device equivalent to a universal Turing machine), the achievement of AGI might rely on reaching a sufficient level of intricacy through the combination of different task-solving capabilities in AI systems. This seems eminently feasible, a mere extrapolation of current approaches in the context of rapidly increasing computing power, even though it requires the combinatorial complexification of the AI algorithms themselves and of the methods used to train them.

OBSTACLES TOWARD ARTIFICIAL GENERAL INTELLIGENCE
BIO-AGENCY
THE KEY ROLE OF AFFORDANCES
THE BOUNDED RATIONALITY OF ALGORITHMS
IMPLICATIONS FOR ROBOTS
POSSIBLE OBJECTIONS
OPEN-ENDED EVOLUTION IN COMPUTER SIMULATIONS
CONCLUSION
DATA AVAILABILITY STATEMENT