Abstract

In ‘Computing Machinery and Intelligence’, Turing, sceptical of the question ‘Can machines think?’, quickly replaces it with an experimentally verifiable test: the imitation game. I suggest that for such a move to be successful the test needs to be relevant, expansive, solvable by exemplars, and unpredictable, and needs to lead to actionable research. The Imitation Game is only partially successful in this regard, and its reliance on language, whilst insightful for partially solving the problem, has put AI progress on the wrong foot, prescribing a top-down approach to building thinking machines. I argue that to fix the shortcomings of modern AI systems a nonverbal operationalisation is required. This is provided by the recent Animal-AI Testbed, which translates animal cognition tests for AI and provides a bottom-up research pathway for building thinking machines that create predictive models of their environment from sensory input.

Highlights

  • In his 1950 paper ‘Computing Machinery and Intelligence’, Turing posed the question ‘Can machines think?’ and promptly discarded it in favour of an experimentally verifiable question: ‘Can machines perform well at the imitation game?’, an open-ended verbal test of conversational ability (Turing 1950).

  • Verbal abilities cannot be studied in isolation from the low-level sensory pathways, predictive models, and interactive abilities they are scaffolded upon, at least conditionally, in our only current exemplars of thinking machines: biological organisms. Frameworks such as active inference suggest that these systems are instead built up from a drive to build predictive models and act in such a way as to best explain their sensory inputs (Friston et al 2011).

  • The test must be relevant, expansive, solvable by exemplars, and, especially when the test will become a target and not just, say, a physical measure, must be unpredictable and lead to actionable research. Under these metrics I show that the Imitation Game, whilst clever, is not a good operationalisation of the original question.

Summary

Introduction

In his 1950 paper ‘Computing Machinery and Intelligence’, Turing posed the question ‘Can machines think?’ and promptly discarded it in favour of an experimentally verifiable question: ‘Can machines perform well at the imitation game?’, an open-ended verbal test of conversational ability (Turing 1950). The test must be relevant, expansive, solvable by exemplars, and, especially when the test will become a target and not just, say, a physical measure, must be unpredictable and lead to actionable research. Under these metrics I show that the Imitation Game, whilst clever, is not a good operationalisation of the original question. In this paper I introduce a different way to operationalise ‘Can machines think?’ which sets aside the verbal components of thought and instead focuses on physical intelligence. This bottom-up operationalisation is currently implemented as the Animal-AI testbed (Crosby et al 2020) in a way which, I will argue, provides our best research path towards building thinking machines. I conclude that solving Animal-AI should be the next step towards building machines that think.

Operationalising ‘Can Machines Think?’
Comparative Cognition
From Comparative Cognition to AI
The Animal-AI testbed
Current Progress
Objections
Conclusion
Findings
Compliance with ethical standards

