Abstract

Artificial Neural Networks have reached “grandmaster” and even “super-human” performance across a variety of games, from those involving perfect information, such as Go, to those involving imperfect information, such as “StarCraft”. Such technological developments from artificial intelligence (AI) labs have ushered in concomitant applications across the world of business, where an “AI” brand-tag is quickly becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong—an autonomous vehicle crashes, a chatbot exhibits “racist” behavior, automated credit-scoring processes “discriminate” on gender, etc.—there are often significant financial, legal, and brand consequences, and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that “... all the impressive achievements of deep learning amount to just curve fitting.” The key, as Pearl suggests, is to replace “reasoning by association” with “causal reasoning”—the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: “we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets—often using an approach known as ‘Deep Learning’—and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality.” In this paper, foregrounding what in 1949 Gilbert Ryle termed “a category mistake”, I will offer an alternative explanation for AI errors; it is not so much that AI machinery cannot “grasp” causality, but that AI machinery (qua computation) cannot understand anything at all.

Highlights

  • The functionality of generative architectures moves beyond the simple function-approximation and discriminative-classification abilities of classical multi-layer perceptrons; yet at heart, in common with all neural networks that learn, and operate on, functions embedded in Euclidean space [14], they remain subject to the constraints of Euclidean embeddings highlighted above (see the sketch after this list)

  • In analyzing what problems neural networks and machine learning can solve, Andrew Ng [15] suggested that if a task only takes a few seconds of human judgment and, at its core, merely involves an association of A with B, it may well be ripe for imminent artificial intelligence (AI) automation
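
The association-of-A-with-B point in these highlights can be made concrete with a small illustration. The sketch below is mine rather than the paper’s: it trains a tiny multi-layer perceptron with plain NumPy gradient descent to associate inputs with targets, i.e. to curve-fit a function embedded in Euclidean space. The network size, training data, and schedule are illustrative assumptions.

    import numpy as np

    # A minimal two-layer perceptron trained purely by associating inputs (A)
    # with targets (B): function approximation, or curve fitting, in Euclidean space.
    rng = np.random.default_rng(0)

    # Training data: samples of y = sin(x) on the interval [-pi, pi].
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x)

    # Network parameters: 1 input -> 32 tanh units -> 1 output.
    W1 = rng.normal(0.0, 0.5, (1, 32))
    b1 = np.zeros(32)
    W2 = rng.normal(0.0, 0.5, (32, 1))
    b2 = np.zeros(1)

    lr = 0.1
    for step in range(10000):
        h = np.tanh(x @ W1 + b1)      # hidden activations
        pred = h @ W2 + b2            # network output
        err = pred - y                # gradient of 0.5 * squared error w.r.t. pred
        # Backpropagation by hand, averaged over the batch.
        gW2 = h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2
        b2 -= lr * gb2
        W1 -= lr * gW1
        b1 -= lr * gb1

    def predict(x_new):
        return np.tanh(x_new @ W1 + b1) @ W2 + b2

    # Inside the training interval the fitted curve approximates sin(x) well ...
    print(predict(np.array([[1.0]])).item(), np.sin(1.0))
    # ... but at x = 3*pi, far outside it, the learned association typically
    # extrapolates badly: the network encodes no reason why sin behaves as it does.
    print(predict(np.array([[3 * np.pi]])).item(), np.sin(3 * np.pi))

Nothing in this procedure touches time, space, or causality; it is “reasoning by association” in the sense the abstract attributes to Pearl, and the same Euclidean-embedding constraint carries over to more elaborate generative architectures.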

Summary

MAKING A MIND

For much of the twentieth century, the dominant cognitive paradigm identified the mind with the brain; as the Nobel laureate Francis Crick eloquently summarized: “As Lewis Carroll’s Alice might have phrased it, ‘You’re nothing but a pack of neurons.’ This hypothesis is so alien to the ideas of most people today that it can truly be called astonishing” (Crick, 1994). The fundamental question that Crick’s hypothesis raises is this: if we ever succeed in fully instantiating a sufficiently accurate simulation of the brain on a digital computer, will we have fully instantiated a digital [computational] mind, with all the human mind’s causal power of teleology, understanding, and reasoning, and will artificial intelligence (AI) have succeeded in delivering “Strong AI” [1]? I will offer a few “critical reflections” on one of the central, albeit awkward, questions of AI: why is it that, seven decades after Alan Turing first deployed an “effective method” to play chess in 1948, we have seen enormous strides in engineering particular machines to do clever things—from driving a car to beating the best at Go—but almost no progress in getting machines to genuinely understand; to seamlessly apply knowledge from one domain to another—the so-called problem of “Artificial General Intelligence” (AGI); the skills that both Hollywood and the wider media really think of, and depict, as AI?

NEURAL COMPUTING
EMBEDDINGS IN EUCLIDEAN SPACE
PROBLEM SOLVING USING ARTIFICIAL NEURAL NETWORKS
AI DOES NOT UNDERSTAND
Microsoft’s XiaoIce Chatbot
We Need to Talk About Tay
Causal Cognition and “Strong AI”
A “Mini” Turing Test
THE CHINESE ROOM
GÖDELIAN ARGUMENTS ON COMPUTATION AND UNDERSTANDING
Findings
CONCLUSION