Abstract

John Harris discusses the problem of other minds, not as it relates to other human minds, but as it relates to artificial intelligences. He also discusses what might be called bilateral mind-reading: humans trying to read the minds of artificial intelligences, and artificial intelligences trying to read the minds of humans. Lastly, Harris discusses whether superintelligent AI, if it could be created, should be afforded moral consideration, and how we might convince superintelligent AI that we ourselves deserve moral consideration. In this commentary, I take up the issues Harris raises. I focus specifically on robots (rather than AI in general), and I set aside future superintelligent AI to concentrate on more limited forms of AI. I argue that the human tendency to attribute minds even to robots with very limited AI, and the question of whether such robots should be given moral consideration, are more pressing issues than those Harris discusses, even though I agree with Harris that the potential for superintelligent AI is a fascinating topic for speculation.
