Abstract

Ambitious value learning proposals for solving the AI alignment problem and avoiding catastrophic outcomes from a possible future misaligned Artificial Superintelligence (ASI), such as Coherent Extrapolated Volition (CEV), have focused on ensuring that the ASI would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds, could also be affected by the ASI’s behavior in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated Volition, an alternative to CEV that directly takes into account the interests of all sentient beings. This ambitious value learning proposal would significantly reduce the risk of astronomical suffering arising from the ASI’s behavior, and thus we have very strong pro tanto moral reasons to implement it instead of CEV. This fact is crucial for conducting an adequate cost–benefit analysis of different ambitious value learning proposals.
