Abstract

In a technical report submitted in 1948, Alan Turing presented a far-sighted survey of the prospect of constructing machines capable of intelligent behaviour. The report is all the more remarkable for having been written at a time when the first programmable digital computers were just beginning to be built, leaving Turing with only paper and pencil with which to explore his modern computational ideas. Turing may have been the first to suggest using randomly connected networks of neuron-like nodes to perform computation, and he proposed the construction of large, brain-like networks of such neurons capable of being trained as one would teach a child. Some modern work has been done on Turing’s neural networks, but all of it introduces new structures outside the scope of Turing’s original specifications in order to correct a technical error on Turing’s part. I propose an alternative solution to this technical error, one which does not require the invention of new structures, and outline an approach which may allow the particular properties of Turing’s networks to be explored more fully in their own right. I also give examples of ways to avoid viewing Turing’s early and strikingly original work through the lens of the conventions of modern neural network theory. Using a “genetical” search to configure such networks, as Turing suggested, is likely to yield non-intuitive or algorithmically opaque results, but this opacity is probably inherent to brain-like networks themselves, and a sign that we are approaching Turing’s initial goal.
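
To make the kind of network under discussion concrete, the short sketch below (my own illustration, not drawn from the paper) simulates the simplest of Turing’s 1948 designs, the A-type unorganized machine: a set of two-state nodes, each wired at random to two other nodes and updated synchronously as the NAND of its inputs. All function names and parameters here are hypothetical.

import random

def make_a_type(n_nodes, seed=0):
    # Wire each node to two randomly chosen source nodes and give it a
    # random initial state (0 or 1).
    rng = random.Random(seed)
    wiring = [(rng.randrange(n_nodes), rng.randrange(n_nodes)) for _ in range(n_nodes)]
    state = [rng.randint(0, 1) for _ in range(n_nodes)]
    return wiring, state

def step(wiring, state):
    # Synchronous update: every node's next state is the NAND of its two inputs.
    return [1 - (state[a] & state[b]) for a, b in wiring]

wiring, state = make_a_type(8)
for t in range(5):
    print(t, state)
    state = step(wiring, state)

A B-type machine of the kind the abstract refers to would further place a modifiable device on each connection, and a “genetical” search would then hunt over the settings of those devices; that training process is beyond the scope of this sketch.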
