Abstract

A number of thinkers have considered the moral obligations humans have, or will have, to intelligent technologies. An underlying assumption is that “moral machines” are decades in the offing, and thus that we have no pressing obligations now. But in the context of technology we have yet to consider whether we might owe moral consideration to something that is not a member of the moral community but eventually will be, as an outcome of human action. Do we now have actual obligations to technologies that do not yet exist? If there are obligations to currently non-existing technologies, we must confront what might be called the Non-Identical Machines Problem: can we harm or benefit an entity by making it one way rather than another? This paper presents the problem and argues that it is more challenging than the standard Non-Identity Problem.
