Abstract

When I was growing up in late-1950s America, mechanical robots were part of my culture, but certainly not of industry (unless it was the toy industry, which capitalized on the delight that young children, mostly boys, took in fictionalized hulking tons of intelligent steel). In film we had, among many instances of cinematic science fiction, the human-friendly Robby the Robot (in "Forbidden Planet") and the malevolent human-hater in "Gog". In science fiction literature, Isaac Asimov amazed us with his Foundation trilogy and his Three Laws of Robotics (which were, in ingenious ways, invariably violated). Since that time, robots in culture have become more sophisticated (R2-D2 in Star Wars), more human-like (the Terminator), and certainly more sinister as they acquire more intelligence and strength. But robots in industry have been mostly drones, capable of performing repetitive mechanical functions far better than humans, functions that humans dislike doing: vacuuming houses, conveying bulk goods at Wal-Mart distribution centers, assembling automobiles, and much more. It is clear that robots are with us, both commercially and in our culture, and that as technology advances their numbers will only increase. A recent issue of New Scientist (Vol. 204, No. 2735, 2009) contains a survey article on robotic surgery (medibots) and shorter articles on robotic arms (which let wheelchair users open doors) and robotic gloves (for NASA). Extrapolating into the future, doomsday scenarios in which robots take over and enslave humans (or exterminate them) have been taken very seriously by those, such as Bill Joy ("Why the Future Doesn't Need Us", Wired, 2000), who advocate that restrictions be placed now, at both the theoretical and the technological levels, on the construction of robots. In particular, the "doomsters" call for severely limiting the reasoning capabilities of robots, if not halting their development and construction altogether. Should we heed their cautions? In their superb book Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen argue that we should not. Indeed, we should provide robots, where technologically possible, with the ability to engage in ethical reasoning and ethical decision-making. (They call such robots "artificial moral agents", or AMAs.) But why? Wouldn't we, in doing this, be taking undeniable steps toward our assured destruction? Wallach and Allen (hereafter WA) …
