The Feasibility of Turing's Child-Machine
According to the philosophers of Artificial Intelligence (AI), Turing Machines and the Imitation Game are the most important concepts proposed by Alan Turing. The Child-Machine Project, which envisions learning machines realized via digital computers, is less known, although it is no less important. According to Turing's project, a programmed machine needs to be a Child-Machine in order to turn into an adult mind, one that understands, judges, and distinguishes. In this article, I argue that Turing's desideratum is not realizable with algorithms alone. In the first section, I introduce the problem, while in the second I briefly analyze concepts such as algorithms, Turing Machines, and their relation. In the third section, I deal with Machine Intelligence and the Child-Machine Project. In the fourth section, I look at a form of understanding which is the basis of the Chinese Room Argument: introspection and reflective thinking, two factors that enable the process by which results are revised. In the fifth section, I analyze why those processes of revision are the stumbling block of classical AI, or GOFAI; as I argue, introspection and reflective thinking are the cognitive faculties that prevent the child-machine from becoming a "thinking adult mind".
- Conference Article
- 10.14236/ewic/tur2004.0
- Jan 1, 2004
Alan Turing is well known for his Turing test in artificial intelligence, but the full range of his contributions in a wide variety of disparate disciplines is perhaps not so well appreciated. Hence the impetus for this conference - the only one in the UK in 2004 to mark the fiftieth anniversary of Turing's death - to attempt to provide an overview that encompassed this remarkable man's pioneering work in several diverse fields. (Proceedings of a conference held at Manchester University on June 5th 2004).
- Research Article
- 10.1162/daed_e_01897
- May 1, 2022
- Daedalus
Getting AI Right: Introductory Notes on AI & Society
- Research Article
- 10.1145/255315.255364
- May 12, 1985
- ACM SIGAPL APL Quote Quad
XPL
- Research Article
- 10.55041/ijsrem27796
- Dec 30, 2023
- International Journal of Scientific Research in Engineering and Management
Artificial intelligence is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines; "AI" may also refer to the machines themselves. AI is not new to scientists: it was introduced in 1943 with the artificial neuron model and became popular in 1950 with the "Turing test", proposed by Alan Turing to answer the question of whether a machine can think. AI is broadly categorized into three types: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence. Machine learning (ML) is a major subfield of artificial intelligence, and deep learning (DL) is in turn a subset of ML; AI is thus the all-encompassing concept from which the others emerged. Application fields of AI include healthcare, business, education, agriculture, finance, law, entertainment and media, software coding and IT processes, security, manufacturing, banking, and transportation. With reference to job creation or destruction, artificial intelligence is "not a job killer but a job-category killer". In its latest report on The Future of Jobs (May 2023), the World Economic Forum (WEF) predicts the creation of 69 million jobs by 2027 thanks to AI, but also the destruction of 89 million jobs. In terms of intelligence level, the IQ of AI was assessed as equivalent to that of a 7-year-old child in January 2022 and a 9-year-old child in December 2022. India is an emerging market globally, and around 12% of its work could be automated by AI. More than 2,000 startups in India are related to AI, and over 90,000 AI professionals work in India. Estimates of the economic impact of AI for select G20 countries suggest that AI could boost India's annual growth rate by 1.3 percentage points by 2035, and AI has the potential to add 1 trillion to India's economy in 2035. We are entering a new technological world; it may be our fortune or our misfortune.
Key Words: Artificial Intelligence, Machine Learning, Jobs, India
- Research Article
- 10.5204/mcj.148
- Jul 15, 2009
- M/C Journal
The Re-Wiring of History
- Book Chapter
- 10.1017/cbo9781107297234.004
- Sep 22, 2016
At this stage in the book we take a break from looking at Alan Turing himself and the imitation game and consider the wider field of artificial intelligence (AI). Whilst the game itself has proved to be arguably one of the most iconic and controversial aspects of AI, it is useful, we feel, to assess just how the game fits into the field and perhaps to give some sort of understanding as to why it is so important. We also take a look at such things as natural language processing, but we avoid heavy mathematics. Anyone who is already well versed in AI may well wish to move straight to Chapter 4. Alan Turing is frequently referred to as the father of artificial intelligence. He was around at the dawn of the computer age and was himself directly involved in early computer systems such as the Bombe, which he designed, and the Colossus, on which his work was used. The field of AI itself, however, was, some claim, first so named after Turing's death, around 1956 (Russell and Norvig, 2012), although in general it could be said to have come into existence as the first computers appeared in the 1940s and 1950s. In AI's formative years attention was focussed mostly on getting computers to do things that, if done by a human, would be regarded as intelligent acts. Essentially it was very human-centred. When Turing proposed his imitation game in 1950, it was perfectly timed to be grabbed hungrily by the young and burgeoning soon-to-be AI community, particularly those interested in the philosophical aspects of the new field. As was shown in the previous chapter, even mainstream radio broadcasting was not afraid to engage with the topic. The game and AI: Turing wanted to come up with a realisable concept of intelligence in machines.
Rather than give a long list of definitions, many of which would be controversial, or construct a series of mathematical statements, most of which would be impracticable, he put the human at the centre and used a form of science involving actual experimentation to confirm the hypothesis.
- Book Chapter
- 10.4324/9781003074991-37
- Mar 20, 2008
Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer. Artificial intelligence (AI) research attempts to develop such models. But even as cognitive science has displaced behaviorism as the dominant paradigm for investigating the human mind, fundamental questions about the very possibility of artificial intelligence continue to be debated. This Essay explores those questions through a series of thought experiments that transform the theoretical question whether artificial intelligence is possible into legal questions such as: Could an artificial intelligence serve as a trustee? What is the relevance of these legal thought experiments for the debate over the possibility of artificial intelligence? A preliminary answer to this question has two parts. First, putting the AI debate in a concrete legal context acts as a pragmatic Occam's razor. By reexamining positions taken in cognitive science or the philosophy of artificial intelligence as legal arguments, we are forced to see them anew in a relentlessly pragmatic context. Philosophical claims that no program running on a digital computer could really be intelligent are put into a context that requires us to take a hard look at just what practical importance the missing reality could have for the way we speak and conduct our affairs. In other words, the legal context provides a way to ask for the cash value of the arguments.
The hypothesis developed in this Essay is that only some of the claims made in the debate over the possibility of AI do make a pragmatic difference, and it is pragmatic differences that ought to be decisive. Second, and more controversially, we can view the legal system as a repository of knowledge: a formal accumulation of practical judgments. The law embodies core insights about the way the world works and how we evaluate it. Moreover, in common-law systems judges strive to decide particular cases in a way that best fits the legal landscape: the prior cases, the statutory law, and the constitution. Hence, transforming the abstract debate over the possibility of AI into an imagined hard case forces us to check our intuitions and arguments against the assumptions that underlie social decisions made in many other contexts. By using a thought experiment that explicitly focuses on wide coherence, we increase the chance that the positions we eventually adopt will be in reflective equilibrium with our views about related matters. In addition, the law embodies practical knowledge in a form that is subject to public examination and discussion. Legal materials are published and subject to widespread public scrutiny and discussion. Some of the insights gleaned in the law may clarify our approach to the artificial intelligence debate.
- Front Matter
- 10.1016/b978-0-12-386980-7.50034-4
- Jan 1, 2013
Front Matter
- Book Chapter
- 10.1007/978-1-4842-3207-1_1
- Dec 22, 2017
The idea of making intelligent, sentient, and self-aware machines is not something that suddenly came into existence in the last few years. In fact, a lot of lore from Greek mythology talks about intelligent machines and inventions having self-awareness and intelligence of their own. The origins and evolution of the computer have been truly revolutionary over a period of several centuries, starting from the basic abacus and its descendant the slide rule in the 17th century, to the first general-purpose computer designed by Charles Babbage in the 1800s. Once computers started evolving with the invention of the Analytical Engine by Babbage and the first computer program, written by Ada Lovelace in 1842, people started wondering whether there could be a time when computers or machines truly became intelligent and started thinking for themselves. Indeed, the renowned computer scientist Alan Turing was highly influential in the development of theoretical computer science, algorithms, and formal languages, and addressed concepts like artificial intelligence and machine learning as early as the 1950s. This brief insight into the evolution of making machines learn is just to give you an idea of something that has been around for centuries but has only recently started gaining a lot of attention and focus.
- Research Article
- 10.1111/anae.12361
- Sep 12, 2013
- Anaesthesia
Isolated forearm – or isolated brain? Interpreting responses during anaesthesia – or ‘dysanaesthesia’
- Research Article
- 10.1098/rsta.2012.0221
- Jul 28, 2012
- Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
The foundations of computation, physics and mentality: the Turing legacy
- Book Chapter
- 10.1093/oso/9780192840554.003.0019
- Feb 23, 2006
The first inkling I had of the work done at Bletchley Park during the Second World War on electronic codebreaking machines resulted from my efforts to find out what Alan Turing had done during the war. I had been assembling a set of original documents and papers for reproduction in a book on the origins of digital computers, when a colleague questioned the fact that Turing did not figure in the book. At this stage I knew only of Turing’s pre-war work on what we now term ‘Turing machines’, which was purely theoretical, and of his post-war work at the National Physical Laboratory, which did not lead to a working computer in the pre-1950 period on which I was concentrating (see Chapter 9). I responded to the implied challenge and gradually tracked down various brief published allusions to wartime work by Turing and others at Bletchley Park (in particular an article by Jack Good), which were then assembled into a draft article. This draft persuaded various people, especially Donald Michie and Jack Good—both of whom worked with Turing at Bletchley Park—to provide additional, although very guarded, information. I decided to try to get the British wartime work on electronic computers declassified. I wrote directly to the Prime Minister at the time, Mr Edward Heath. The reply I received, signed by the Prime Minister himself, although it politely refused my request, nevertheless constituted for several years what I think was the only unclassified official document admitting that there had been a wartime electronic computer project in Britain. The result of this investigation was my ‘On Alan Turing and the Origins of Digital Computers’, which I presented at Michie’s annual machine intelligence workshop at Edinburgh in October 1972. 
The proceedings of the workshop were due to be published by the University of Edinburgh Press, and after I had given my presentation I overheard two people connected with the University Press voicing concern over whether they dare include it in the book. The conversation ended with them agreeing that it would be all right to go ahead since, if there were any repercussions, it would be the head of the University Press, namely Prince Philip, the Duke of Edinburgh, who would be held responsible.
- Book Chapter
- 10.1002/9781444367072.wbiee870
- Jun 15, 2020
Artificial intelligences are machines that can perform tasks that are characteristically thought of as requiring intelligence. This entry distinguishes between four different types of artificial intelligence (AI): domain‐specific AI, artificial general intelligence (AGI), sentient AI, and "superintelligence." Existing AI, which is domain‐specific, raises concerns about algorithmic bias, privacy, surveillance, and social impacts, including the possibility of mass unemployment. It is also vital that decisions reached by AI are available for public scrutiny and justification; the question of when we might be justified in trusting decisions reached by AI remains open. Military uses of AI are especially controversial. Should AGI be realized, these issues will become even more urgent and will be exacerbated by the possibility that political and economic questions might be handed over to artificial general intelligences. The idea of machine sentience raises the problem of other minds in an especially stark form. Questions would also arise as to whether sentient machines would have moral status or be moral persons. Might they even acquire more moral standing than human beings? The suggestion that AI might lead to the emergence of superintelligences, which might pose a threat to the human species, highlights an issue that is central to the ethics of all these sorts of AI: who has the right to make decisions about technologies that have the potential to radically change the world we all share?
- Book Chapter
- 10.1093/oso/9780198747826.003.0012
- Jan 26, 2017
I never met Alan Turing; he died five years before I was born. But somehow I feel I know him well, not least because many of my own intellectual interests have had an almost eerie parallel with his. And by a strange coincidence, the ‘birthday’ of Wolfram Mathematica, 23 June 1988, is aligned with Turing’s own. I think I first heard of Alan Turing when I was about 11 years old, right around the time I saw my first computer. Through a friend of my parents, I had got to know a rather eccentric old classics professor, who, knowing my interest in science, mentioned to me this ‘bright young chap named Turing’ whom he had known during the Second World War. One of this professor’s eccentricities was that, whenever the word ‘ultra’ came up in a Latin text, he would repeat it over and over again and make comments about remembering it. At the time, I didn’t think much of it, although I did remember it. Only years later did I realize that ‘Ultra’ was the codename for the British cryptanalysis effort at Bletchley Park during the war. In a very British way, the classics professor wanted to tell me something about it, without breaking any secrets—and presumably it was at Bletchley Park that he had met Alan Turing. A few years later I heard scattered mentions of Alan Turing in various British academic circles. I heard that he had done mysterious but important work in breaking German codes during the war, and I heard it claimed that after the war he had been killed by British Intelligence. At that time some of the British wartime cryptography effort was still secret, including Turing’s role in it. I wondered why. So I asked around, and started hearing that perhaps Turing had invented codes that were still being used. In reality, though, the continued secrecy seems to have been intended to prevent its being known that certain codes had been broken, so that other countries would continue to use them. I am not sure where I next encountered Alan Turing. 
Probably it was when I decided to learn all I could about computer science, and saw all sorts of mentions of ‘Turing machines’. But I have a distinct memory from around 1979 of going to the library and finding a little book about Alan Turing written by his mother, Sara Turing.
- Book Chapter
- 10.1093/oso/9780198747826.003.0009
- Jan 26, 2017
I had the good fortune to work closely with Alan Turing and to know him well for the last 12 years of his short life. It is a rare experience to meet an authentic genius. Those of us privileged to inhabit the world of scholarship are familiar with the intellectual stimulation furnished by talented colleagues. We can admire the ideas they share with us and are usually able to understand their source; we may even often believe that we ourselves could have created such concepts and originated such thoughts. However, the experience of sharing the intellectual life of a genius is entirely different; one realizes that one is in the presence of an intelligence, a sensitivity of such profundity and originality that one is filled with wonder and excitement. Alan Turing was such a genius, and those, like myself, who had the astonishing and unexpected opportunity created by the strange exigencies of the Second World War to be able to count Turing as colleague and friend will never forget that experience, nor can we ever lose its immense benefit to us. Before the war, in 1935–36, Turing had done fundamental work in mathematical logic and had invented a concept that has come to be known as the ‘universal Turing machine’ (see Chapter 6). His purpose was to make precise the notion of a computable mathematical function, but he had in fact provided a blueprint for the most basic principles of computer design and for the foundations of computer science. I joined the distinguished team of mathematicians and first-class chess players working on the Enigma code in January 1942. Alan Turing was the acknowledged leading light of that team. However, I must emphasize that we were a team—this was no one-man show! Indeed, Turing’s contribution was somewhat different from that of the rest of the team, being more concerned with improving our methods, especially the machines we used to help us, and less concerned with our daily output of deciphered messages. 
It was due to the efforts of Turing and the entire team that Churchill was able to describe our work as ‘my secret weapon’.