Abstract
One example of an artificial intelligence ethical dilemma is the autonomous vehicle scenario presented by Massachusetts Institute of Technology researchers in the Moral Machine Experiment. To model human judgments in such dilemmas, the MIT researchers used a classic statistical method, the hierarchical Bayesian (HB) model. This paper builds on previous work on modeling moral decision making, applies a deep learning method to learn human ethics in this context, and compares it to the HB approach. Both methods were tested on predicting the moral decisions of simulated populations of Moral Machine participants. Overall, the results indicate that deep neural networks can effectively learn the group morality of a population through observation, and that they outperform the Bayesian model in cases of model mismatch.
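As a rough illustration of the deep learning approach compared here, the sketch below trains a small feedforward network to predict which outcome a simulated participant spares. This is a minimal sketch on synthetic data, not the paper's model: the feature encoding, network size, and the latent preference vector used to simulate labels are all assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical encoding (an assumption, not the paper's): each dilemma is a
# feature vector contrasting the two outcome groups (e.g., counts, ages,
# legality of crossing); the label is the group a participant chose to spare.
n_samples, n_features = 1000, 8
X = torch.randn(n_samples, n_features)     # simulated scenario features
true_w = torch.randn(n_features)           # latent "population morality"
y = (X @ true_w + 0.5 * torch.randn(n_samples) > 0).float()

model = nn.Sequential(
    nn.Linear(n_features, 32),
    nn.ReLU(),
    nn.Linear(32, 1),                      # logit for "spare group A"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):                       # plain full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X).squeeze(1) > 0).float() == y).float().mean()
print(f"training accuracy: {accuracy:.3f}")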
Introduction
With the rapid development toward automation, future reliance on artificial intelligence (AI) for everyday tasks is clear. Often embedded within these tasks are small moral decisions: for example, is violating a minor traffic law justified when it saves the time of others? While humans take these small ethical decisions for granted, society must properly equip AI products with moral compasses if we are to entrust machines with even small daily tasks.
Predictive accuracy increases with the number of training samples, as the model is given more observations from which to learn the underlying distribution.
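This trend is straightforward to reproduce on synthetic data. The sketch below is our own illustration, not the paper's experiment: it uses scikit-learn's learning_curve on a made-up classification task to show held-out accuracy growing with training-set size.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a population of moral decisions (an assumption).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)
for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:5d} training samples -> cross-validated accuracy {score:.3f}")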
Summary
Public acceptance of AI as a responsible moral agent is one of the greatest obstacles facing automation and machine learning. Current research in AI moral decision making often theorizes abstract and general approaches to training moral agents [3, 4]. For example, Shaw et al. [4] propose a machine learning framework in which a group of statistically trained models determines a moral action based on each individual model's decision and on the confidence each model has in the morality of the other models. Still, reducing complex moral scenarios to a form that such a framework can digest is unwieldy.
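To make the voting idea concrete, the toy sketch below shows one way such confidence-weighted voting could work. This is our reading of the framework as summarized above, not Shaw et al.'s implementation; the three models, two actions, and trust values are all invented for illustration.

import numpy as np

votes = np.array([0, 1, 1])   # action (0 or 1) chosen by each of three models
# trust[i, j]: confidence model i has in the morality of model j (invented).
trust = np.array([
    [1.0, 0.6, 0.8],
    [0.7, 1.0, 0.9],
    [0.5, 0.4, 1.0],
])
# Weight each model by the average confidence the *other* models place in it.
weights = (trust.sum(axis=0) - np.diag(trust)) / (len(votes) - 1)
# Tally confidence-weighted votes per action and pick the winner.
scores = np.bincount(votes, weights=weights, minlength=2)
print("chosen action:", scores.argmax())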