Abstract

We are witnessing the emergence of an "AI economy and society" in which AI technologies and applications increasingly affect health care, business, transportation, defense, and many aspects of everyday life. Many successes have been reported, with AI systems even surpassing the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption. These shortcomings and concerns have been documented in both the scientific and general press: accidents with self-driving cars, bias against people of color in healthcare, hiring, and face recognition systems, and seemingly correct decisions later found to have been made for the wrong reasons. This has led to many government and regulatory initiatives requiring trustworthy and ethical AI that provides accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency, and safety. The challenges of delivering trustworthy AI systems have motivated intense research on explainable AI (XAI). The original aim of XAI is to provide human-understandable information about how AI systems make their decisions in order to increase user trust. In this paper we first briefly summarize current XAI work and then challenge the recent arguments that present accuracy and explainability as mutually exclusive and that focus mainly on deep learning with its limited XAI capabilities. We then present our recommendations for the broad use of XAI in all stages of delivery of high-stakes trustworthy AI systems, e.g., development; validation/certification; and trustworthy production and maintenance.
