Abstract

Analyzing textual data by means of AI models has been recognized as highly relevant in information systems research and practice, since a vast amount of data on eCommerce platforms, review portals, and social media is available in textual form. Here, language models such as BERT, which are deep learning AI models, constitute a breakthrough and achieve leading-edge results in many applications of text analytics, such as sentiment analysis of online consumer reviews. However, these language models are “black boxes”: it is unclear how they arrive at their predictions. Yet, applications of language models, for instance in eCommerce, require checks and justifications by means of global reconstructions of their predictions, since the decisions based thereon can have large impacts, and such justifications may even be mandatory under regulations such as the GDPR. To this end, we propose a novel XAI approach for global reconstructions of language model predictions for token-level classifications (e.g., aspect term detection) by means of linguistic rules based on NLP building blocks (e.g., part-of-speech). The approach is analyzed on different datasets of online consumer reviews and NLP tasks. Since our approach allows for different setups, we are further the first to analyze the trade-off between comprehensibility and fidelity of global reconstructions of language model predictions. With respect to this trade-off, we find that our approach indeed allows for balanced setups for global reconstructions of BERT’s predictions. Thus, our approach paves the way for a thorough understanding of language model predictions in text analytics. In practice, our approach can assist businesses in their decision-making and supports compliance with regulatory requirements.
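To make the core idea concrete, the following is a minimal toy sketch (not the paper’s actual method) of a global rule-based surrogate for a token-level classifier: a single part-of-speech rule approximates a black-box aspect-term detector, and fidelity is measured as token-level agreement. The tiny POS lexicon, the stand-in black box, and all function names are illustrative assumptions.

```python
# Toy sketch: global reconstruction of a token-level classifier with one
# linguistic rule. Lexicon, black box, and names are illustrative assumptions.

# Minimal hand-crafted POS lexicon (stands in for a real POS tagger).
POS = {"the": "DET", "battery": "NOUN", "life": "NOUN", "is": "VERB",
       "great": "ADJ", "and": "CONJ", "screen": "NOUN", "bright": "ADJ"}

def black_box(tokens):
    # Stand-in for a BERT-style aspect-term detector: flags known aspect terms.
    aspects = {"battery", "screen"}
    return [t in aspects for t in tokens]

def rule_surrogate(tokens):
    # Single comprehensible linguistic rule: a token is an aspect term iff
    # its part-of-speech is NOUN.
    return [POS.get(t) == "NOUN" for t in tokens]

def fidelity(tokens):
    # Fidelity of the global reconstruction: fraction of tokens on which
    # the rule surrogate agrees with the black-box prediction.
    bb, sg = black_box(tokens), rule_surrogate(tokens)
    return sum(b == s for b, s in zip(bb, sg)) / len(tokens)

sent = "the battery life is great and the screen is bright".split()
print(fidelity(sent))  # 0.9: the rule over-predicts on the noun "life"
```

Adding rules (e.g., conditioning on neighboring tokens) would raise fidelity at the cost of comprehensibility, which is exactly the trade-off the abstract refers to.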
