Abstract

The dream of making conscious humanoid robots is one that has long tantalized humanity, yet today it seems closer than ever before. Assuming that science can make it happen, the question becomes: should we make it happen? Is it morally permissible to create synthetic beings with consciousness? While a consequentialist approach may seem logical, attempting to assess the potential positive and negative consequences of such a revolutionary technology is highly speculative and raises more questions than it answers. Accordingly, some turn to ancient and not-so-ancient stories of “automata” for direction. Of the many automata conjured throughout history, if not in matter then in mind, the Golem stands out as one of the most persistent paradigms employed to discuss technology in general and technologically engendered life forms in particular. In this essay, I introduce a novel reading of the Golem paradigm to argue not from consequentialism but from a deep-seated, two-thousand-year-old tradition, the ethical implications of which are wholly deontological.


