Abstract

Traffic engineers, vehicle manufacturers, technology groups, and government agencies are anticipating and preparing for the emergence of fully automated vehicles into the American transportation system. This new technology has the potential to revolutionize many aspects of transportation, particularly safety. However, fully automated vehicles may not create the truly crash-free environment predicted. One particular problem is crash assignment, especially between automated vehicles and nonautomated vehicles. Although some researchers have indicated that automated vehicles will need to be programmed with some sort of ethical system in order to make decisions on how to crash, few, if any, studies have been conducted on how particular ethical theories will actually make crash decisions and how these ethical paradigms will affect automated vehicle programming. The integration of three ethical theories—utilitarianism, respect for persons, and virtue ethics—with vehicle automation is examined, and a simple programming thought experiment is used to demonstrate the difficulty in selecting and implementing different ethical decisions. A simple crash scenario is introduced; an automated vehicle must choose between three crash types on the basis of a randomly assigned ethical theory. The results of the experiment indicate that in specific crash scenarios, utilitarian ethics may reduce the total number of fatalities that result from automated vehicle crashes, although other ethical systems may be useful for developing rules used in machine learning. The experiment demonstrates that understanding rational ethics is crucial for developing safe automated vehicles.
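The crash-choice thought experiment described above can be sketched in code. The option names, fatality counts, and decision rules below are illustrative assumptions, not values or algorithms from the study; they only show how two of the three ethical theories could yield different crash decisions from the same inputs.

```python
import random

# Hypothetical crash options facing the automated vehicle. The names and
# fatality counts are illustrative assumptions, not data from the study.
CRASH_OPTIONS = {
    "brake_straight": {"occupant_fatalities": 2, "bystander_fatalities": 0},
    "swerve_left": {"occupant_fatalities": 0, "bystander_fatalities": 1},
    "swerve_right": {"occupant_fatalities": 1, "bystander_fatalities": 1},
}

def utilitarian_choice(options):
    """Pick the option that minimizes total expected fatalities."""
    return min(options, key=lambda name: sum(options[name].values()))

def respect_for_persons_choice(options):
    """One possible deontological reading: never redirect harm onto
    bystanders, i.e. rule out any option that kills a person outside
    the vehicle; fall back to the utilitarian rule if none remains."""
    permissible = [n for n, o in options.items()
                   if o["bystander_fatalities"] == 0]
    return permissible[0] if permissible else utilitarian_choice(options)

# Virtue ethics, the third theory discussed, resists encoding as a single
# closed-form rule; per the abstract, such systems may instead inform the
# rules used in machine learning, so it is omitted from this sketch.

def decide(theory, options):
    """Dispatch the crash decision to the randomly assigned theory."""
    rules = {
        "utilitarian": utilitarian_choice,
        "respect_for_persons": respect_for_persons_choice,
    }
    return rules[theory](options)

# Randomly assign an ethical theory, as in the thought experiment.
theory = random.choice(["utilitarian", "respect_for_persons"])
print(theory, "->", decide(theory, CRASH_OPTIONS))
```

With these assumed numbers the two theories diverge: the utilitarian rule swerves left (one fatality in total), while the respect-for-persons rule brakes straight rather than transfer harm to a bystander, illustrating why theory selection, not just programming, drives the outcome.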
