Abstract

Explainable AI (xAI) refers to the concept and set of techniques that aim to make artificial intelligence (AI) systems more transparent and understandable to humans. The increasing use of AI models in complex tasks and critical domains drives the current focus on xAI, which enables humans to comprehend and trust the decisions made by AI systems. The fast-growing literature base on xAI means researchers need support to identify core research areas and current research patterns effectively, efficiently, and with provenance. Our research design comprises a systematic literature review (SLR) of xAI publications, after which the same dataset was analyzed using two natural language processing (NLP) techniques, namely Latent Dirichlet Allocation (LDA) and Bidirectional Encoder Representations from Transformers (BERT). The wordlists of possible xAI topics generated by LDA and BERT were independently labelled by three xAI researchers, and an xAI expert selected the most appropriate label. These results were then triangulated with the results of the SLR to gain new insights into xAI research topics and trends.
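
To illustrate the LDA side of the topic-extraction step, the following is a minimal sketch, not the authors' actual pipeline: it fits an LDA model over a toy corpus of abstracts with scikit-learn and prints the top words per topic, i.e. the kind of wordlists that human labellers would then name. The `abstracts` corpus, the topic count, and the number of top terms are illustrative assumptions; the BERT-based variant would typically replace the bag-of-words step with transformer embeddings followed by clustering.

```python
# Minimal sketch (not the authors' pipeline): derive candidate topic
# wordlists from a small corpus of xAI abstracts using LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative stand-in corpus; the study used abstracts from its SLR dataset.
abstracts = [
    "explainable ai methods for deep learning transparency",
    "post-hoc interpretability of black box models",
    "trust and human factors in ai decision support",
]

# Bag-of-words representation of the abstracts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

# Fit LDA; n_components (the number of topics) would normally be tuned.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Print the top words per topic -- these wordlists are what the
# researchers would independently label with topic names.
terms = vectorizer.get_feature_names_out()
for idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {idx}: {', '.join(top_terms)}")
```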
