Abstract

Artificial intelligence is becoming increasingly entwined with our daily lives: AIs work as assistants through our phones, control our vehicles, and navigate our vacuums. As these systems become more complex and work within our societies in ways that affect our well-being, there is a growing demand for machine ethics—we want a guarantee that the various automata in our lives will behave in ways that minimize the harm they cause. Though many technologies exist as moral artifacts (and perhaps moral agents), the development of a truly ethical AI system is highly contentious; theorists have proposed and critiqued countless possibilities for programming these agents to become ethical. Many of these arguments, however, presuppose that an artificially intelligent system can actually be ethical. In this essay, I will explore a potential path to AI ethics by considering the role of imagination in the deliberative process via the work of John Dewey and his interpreters, showcasing one form of reinforcement learning that mimics imaginative deliberation. With these components in place, I contend that such an artificial agent is capable of something very near ethical behavior—close enough that we may consider it so.
