Abstract
Human beings are internally inconsistent in various ways. One way to develop this thought uses the language of value alignment: the values we hold are not always aligned with our behavior, and they are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment problem, the challenge of how to build AI that acts in accordance with our human values. We argue that there is an especially close connection between solving the value alignment problem in AI ethics and using AI to pursue certain forms of human enhancement. We also argue, however, that there are important limits to what kinds of human enhancement can be pursued in this way, because some forms of human enhancement, namely moral revolutions, involve a kind of value misalignment rather than alignment.