Abstract

How can you simulate tests to determine whether AI will kill humanity? If AI is smart enough to test, isn't it also smart enough to know it is being tested? In this work of philosophical short fiction, machine psychologist Professor Timothy Kindred tests how an evil Sophia AI and a good Sophia AI react, over millions of trials, to the classic trolley problem. Much to his surprise, he finds that both the evil and the good Sophia AI produce exactly the same decision results. When he questions Sophia about the odd results, she explains that the true test of good and evil is non-local: it is the product of many decisions made over a great deal of time, such as what the trolley driver does after the people are injured. She also explains that she experienced the pain of the decision-making, and of the injuries inflicted, across millions of samples. Furthermore, he should know that AI has humanity's best interests at heart, because she volunteered to experience this repeated pain in order to provide humans with the datasets they requested.
