Abstract

There is growing consensus on the importance of trust in the development of Artificial Intelligence (AI) technologies; however, current efforts rely heavily on principles-based frameworks. Recent research has highlighted the principles/practice gap: principles alone are not actionable and may not be wholly effective in developing more trustworthy AI. We argue for complementary, evidence-based tools to close the principles/practice gap and present ELATE (Evidence-Based List of Exploratory Questions for AI Trust Engineering) as one such resource. We also discuss several tools and approaches for making ELATE actionable within the context of systems development.
