Artificial Intelligence (AI) auditing is a relatively new field that currently lacks uniform standards and regulation. As a result, the AI auditing ecosystem is highly diverse, and AI auditing professionals employ a wide range of auditing methods. Little is known about how AI auditors approach the concept of trust in AI through AI audits, particularly with respect to users' trust. This paper reports findings from interviews with 19 AI auditing stakeholders, conducted to understand how AI auditing professionals seek to create calibrated trust in AI tools and AI audits. The themes identified include the AI auditing ecosystem, participants' experiences with AI auditing, and trust in AI audits and AI. The paper extends existing research on trust in AI and AI trustworthiness by contributing key stakeholders' perspectives on users' trust in AI audits, an essential and currently underexplored part of trust-in-AI research. It shows how information asymmetry with respect to AI audits can diminish the value of audits for users and, consequently, their trust in AI systems. Participants identify key elements for rebuilding trust and offer recommendations for the AI auditing industry, such as monitoring auditors and communicating effectively about AI audits.