Abstract

There is widespread agreement that, while artificial intelligence offers significant potential benefits for individuals and society at large, serious challenges remain in its governance. Recent policymaking has focused on establishing principles for the trustworthy use of AI. Adhering to these principles is especially important for protecting vulnerable groups and ensuring their confidence in the technology and its uses. Using the Department of Veterans Affairs as a case study, we focus on three principles of particular interest: (i) designing, developing, acquiring, and using AI only where the benefits of use significantly outweigh the risks and the risks are assessed and managed; (ii) ensuring that applications of AI occur in well-defined domains and are accurate, effective, and fit for their intended purposes; and (iii) ensuring that the operations and outcomes of AI applications are sufficiently interpretable and understandable by subject matter experts, users, and others. We argue that these principles and their application extend to vulnerable groups more generally, and that adherence to them can allow the VA and other organizations to continue modernizing their technology governance, leveraging the gains of AI while managing its risks.
