Abstract
In the effort to align AI with human values, one pressing ethical problem is value conflict: it is not obvious what we should do when two compelling values, such as autonomy and safety, conflict in the design or implementation of a medical AI technology. This paper reports findings from a scoping review at the intersection of three concepts (AI, moral value, and health) concerning value conflict and its arbitration. The paper examines several important and distinctive cases of value conflict, and then distinguishes three categories of value conflict: personal value conflict, interpersonal or intercommunal value conflict, and definitional value conflict. It then describes three general paths forward for addressing value conflict: developing additional ethical theory, gathering additional empirical evidence, and bypassing the conflict altogether. Finally, it reflects on how effective each of these three paths is for addressing each category of value conflict, and points toward what is needed to better approach value conflicts in medical AI.