Artificial General Intelligence and the Future of the Human Race

Bryon Pavlacka

Artificial Intelligence is all around us. It manages your investments, makes the subway run on time, diagnoses medical conditions, searches the internet, solves enormous systems of equations, and beats human players at chess and Jeopardy. However, this "narrow AI," designed for solving specific, narrow problems, is something distinctly different from Artificial General Intelligence, or "AGI": true thinking machines with human-like general intelligence (Wang, Goertzel, & Franklin, 2008, p. v). While AGI is not rigidly defined, it is often envisioned as being self-aware and capable of complex thought, and it has long been a staple of science fiction, appearing prominently in popular films such as 2001: A Space Odyssey, Terminator, and I, Robot. In each of these films, the machines go beyond their original programmed purpose and become violent threats to humans. This possibility has been pondered extensively by those working in the field of AGI research. Thinkers such as Ray Kurzweil, Ben Goertzel, and Hugo De Garis argue that we are entering a world of extremely intelligent machines (Kurzweil, 2005; Goertzel & Pennachin, 2007; De Garis, 2008). This article will discuss some of the ideas that researchers have about how AGI relates to the well-being of humans, including how such machines could help us and how they could potentially harm us.

One scenario in which generally intelligent machines go bad and become a threat is if they end up in the wrong hands. Such machines, in the hands of small, politically motivated terrorist groups or large military organizations, could be used as weapons. AGI could offer such groups the ability to spy on adversaries, gather and synthesize information, and strategize attacks against the rest of the population. Developers of AGI will have little knowledge of whose hands their technology will end up in; they could unknowingly be constructing deadly weapons to be used against humanity. Of course, such threats are not merely imaginary future possibilities: narrow AI is already used by the militaries of first-world countries for war purposes. Consider drones such as the Northrop Grumman X-47B, an unmanned combat aerial vehicle being tested by the U.S. Navy (DefenseTech.org, 2011). That's right, there is no pilot. The drone can be given orders, but the exact way in which those orders are carried out is left up to the drone's narrow AI system. Whether such systems will ever be extended toward general intelligence is currently unknown. However, the U.S. military has shown interest in producing and controlling generally intelligent killing machines as well, as made evident by the paper "Governing Lethal Behavior" by Ronald C. Arkin. The paper was commissioned by the U.S. Army Research Office and provides theory and formalisms for the implementation of a lethal AGI machine (Arkin, 2008). It describes a way in which a machine can be restricted to "ethical" behavior determined by its creator. The author optimistically hopes that his proposed formalisms may lead to generally intelligent battle drones that behave more ethically in battle than humans do, yet the ability to define "ethical actions" remains the privilege of the machines' engineers (Arkin, 2008, p. 62). Because of the potential for AGI to be used as a weapon, the production of such machines carries many of the same moral ramifications as the production of other weapons.
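The core idea of such a constraint-based approach can be sketched informally: every action the machine proposes is checked against rules written by its engineers before it is allowed to execute. The short Python sketch below is a toy illustration only, not Arkin's actual formalism; the rule fields (target_is_civilian, expected_collateral) and thresholds are invented for the example.

    # Toy illustration of an engineer-defined constraint filter on actions.
    # Not Arkin's formalism: names, fields, and rules are invented here.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        target_is_civilian: bool
        expected_collateral: int

    def permissible(action: Action) -> bool:
        # The engineers, not the machine, decide what counts as "permissible."
        if action.target_is_civilian:
            return False
        if action.expected_collateral > 0:
            return False
        return True

    def governed_execute(action: Action) -> str:
        # Veto any proposed action that violates the engineer-defined rules.
        if permissible(action):
            return f"executing: {action.name}"
        return f"vetoed: {action.name}"

    print(governed_execute(Action("engage target", target_is_civilian=False, expected_collateral=0)))
    print(governed_execute(Action("engage target", target_is_civilian=False, expected_collateral=3)))

The point of the sketch is simply that the "ethics" live in ordinary code written and maintained by the system's engineers, which is exactly why the definition of ethical action remains their privilege.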
Another threat to humanity is the possibility that a good machine, one specifically created to be benevolent, may go bad, as was the case with HAL 9000 in 2001: A Space Odyssey. Evolutionary and learning algorithms may lead to a system that is essentially a black box, something so complicated that experts may be unable to understand its inner workings completely. As with humans, such machines may have extremely complex psychologies, so the potential for malevolence is non-zero (Goertzel & Pennachin, 2007). Even if special constraints are placed on the behavior of such systems, rules like "do not kill" could potentially be overwritten after successive updates initiated by the AGI system itself (Singularity Institute for Artificial Intelligence [SIAI], 2001, para. 2).

Such a scenario may become greatly feared by the public, leading to what one researcher calls "The Artilect War." In Hugo De Garis' essay, "The Artilect War: