Abstract

AI systems that exhibit significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question of why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focuses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and AI audit frameworks. We highlight the responsibilities of funding bodies to ensure that investment is channelled towards trustworthy and safe AI systems, and provide case studies of how other ethical funding principles are managed. We offer a first sight of two proposals for procedures that funding bodies could employ. The first proposal is the inclusion of a ‘Trustworthy AI Statement’ section in the grant application form, together with an example of the associated guidance. The second proposal outlines the wider management requirements of a funding body for the ethical review and monitoring of funded projects, to ensure adherence to the ethical strategies set out in the applicant’s Trustworthy AI Statement. The anticipated outcome of adopting these proposals would be a ‘stop and think’ stage during project planning and application, requiring applicants to implement methods for the ethically aligned design of AI. In essence, it asks funders to send the message: “if you want the money, then build trustworthy AI!”.

Highlights

  • Trustworthy AI has been a focus in the data science and Artificial Intelligence (AI) field for several years

  • One of the solutions we propose in this paper is a simple adjustment to the application procedure, requiring a Trustworthy AI Statement in which applicants must outline their plans to ensure they follow an ethically aligned design approach

  • The expectation is that the funding body will provide call-specific guidance regarding the expected requirements for the ethical design of the proposed AI systems



Introduction

Trustworthy AI has been a focus in the data science and AI field for several years. It has increased significantly in prominence and urgency with recent controversies involving public sector systems [1] and influencing elections [2]. In the UK, August 2020 was a pivotal month in light of a number of legal cases and decisions challenging the use of some AI and machine learning systems. Examples include the judgement on the UK government’s visa streaming algorithm in August 2020 and its resultant suspension [3]. This landmark legal challenge highlighted the human rights and equality harms that can be caused by some AI systems [4]. Also in August 2020, we saw the public uproar and legal challenge caused by the algorithm employed to predict grades for students. This clearly indicates that public awareness of AI systems, and the impetus to hold them to account, is increasing. Many cases have been heard globally and have been upheld [8, 9].

