Abstract

Since the advent of artificial intelligence, many autonomous machines have been making their way into society. With the burgeoning development of autonomous systems such as self-driving cars have come concerns about how machines will make moral decisions, and thus a new field called Machine Ethics has emerged. Machine ethics deals with the moral dilemmas machines face while interacting with humans, and possibly with other machines, and aims to ensure that the decisions taken by an algorithm are morally acceptable. This is in contrast to computer ethics, which focuses solely on the ethical problems and protocols surrounding humans' use of technology. In this article, we explore the moral dilemmas faced by autonomous vehicles and train an artificial intelligence model that makes ethically acceptable decisions based on the data collected by the well-known Moral Machine experiment. We then describe the results obtained from the model. First, we summarize the accuracies obtained by training multiple models with different techniques. Then, we document how the model's accuracy varies when Hofstede's six dimensions of national culture are used as a factor during data pre-processing.
