Abstract

Explainable AI (XAI) has started experiencing explosive growth, echoing the explosive growth that preceded it: AI being used for practical purposes that impact the general public. This spread of AI into the world outside research labs brings with it pressures and requirements that many of us have perhaps not thought about deeply enough. In this keynote address, I will explain why I think we have a very long way to go. One way to characterize our current state is that we are doing well at explaining some things. In a sense, this is reasonable: the XAI field is young and still finding its way. However, moving forward demands progress in (at least) three areas. (1) How we go about XAI research: Explainable AI cannot succeed if the only research foundations brought to bear on it are AI foundations. Likewise, it cannot succeed if the only foundations used come from psychology, education, and so on. Thus, a challenge for our emerging field is how to conduct XAI research in a truly effective multi-disciplinary fashion, one that integrates the foundations behind what we can make AI algorithms do with solid, well-founded principles for explaining the complex ideas behind those algorithms to real people. Fortunately, a few researchers have started to build such foundations. (2) What we can succeed at explaining: So far, we as a field are doing a certain amount of cherry-picking in what we explain. We tend to choose what to explain by what we can figure out how to explain, but we are leaving too much out. One urgent case in point is the societal and legal need to explain the fairness properties of AI systems. The above challenges are important, but the field is already becoming aware of them. Thus, this keynote will focus mostly on the third challenge, namely: (3) Who we can explain to. Who are the people we have even tried to explain AI to so far? What are the societal implications of whom we explain to well and whom we do not? Our field has not even begun to consider this question. In this keynote I will discuss why we have to explain to populations to whom we have given little thought: people who are diverse in many dimensions, including gender diversity, cognitive diversity, and age diversity. Addressing all of these challenges is necessary before we can claim to explain AI fairly and well.
