Abstract
In the past several years there has been rapid development of new technologies, applications, organizations, and institutions in the area of artificial intelligence (AI). At the same time, ethical reasoning about AI has not kept pace with these advances. As a result, developers are left to rely on existing rules, professional codes, policies, and personal ethics, which may not provide appropriate guidance about ethical conduct and may require greater specificity (O'Leary). Many commentators have acknowledged the need for a clearer understanding of the ethical values and principles that should guide AI research. Drawing on insights from the formation of the field of biomedical ethics, we argue that AI ethics should adopt a method based on prima facie duties derived from W.D. Ross's approach to ethics, an approach that has proved influential in biomedical ethics. We further propose a modification to the list of principles proposed by Floridi and Cowls, arguing that the principles of explicability and accountability should be separated for ease of application. We argue that this method of applying principles is precisely what has been missing in AI ethics: it is the crucial link between the now-common lists of principles and putting them into practice in a way that can inform actual developments on the ground.