Abstract

In the domain of visual question answering (VQA), studies have shown improvement in users' mental model of the VQA system when they are exposed to examples of how these systems answer certain image‐question (IQ) pairs. In this work, we show that presenting controlled counterfactual IQ examples is more effective at improving users' mental model than simply showing random examples. We compare a generative approach and a retrieval‐based approach for presenting counterfactual examples. We use recent advances in generative adversarial networks to generate counterfactual images by deleting and inpainting certain regions of interest in the image. We then expose users to changes in the VQA system's answer on those altered images. To select the region of interest for inpainting, we experiment with both human‐annotated attention maps and a fully automatic method that uses the VQA system's attention values. Finally, we test the users' mental model by asking them to predict the model's performance on a test counterfactual image. We note an overall improvement in users' accuracy at predicting answer change when shown counterfactual explanations. While realistic retrieved counterfactuals are unsurprisingly the most effective at improving the mental model, we show that a generative approach can be equally effective.
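The generative pipeline the abstract describes can be summarized as: pick the most-attended region of the image, delete and inpaint it, and compare the VQA system's answers before and after. Below is a minimal sketch of that loop; the `vqa_model` and `inpainter` callables and the `top_frac` threshold are hypothetical placeholders, not the paper's released interfaces.

```python
import numpy as np

def counterfactual_answer_change(image, question, attention, vqa_model, inpainter,
                                 top_frac=0.1):
    """Build a counterfactual image by removing the most-attended region and
    report whether the VQA model's answer changes.

    Assumed (hypothetical) interfaces:
      vqa_model(image, question) -> answer string
      inpainter(image, mask)     -> image with masked pixels filled in
      attention: HxW array of per-pixel importance (a human-annotated map or
                 the VQA model's own attention, resized to the image).
    """
    # Mark the top `top_frac` fraction of attention values as the region
    # of interest to delete.
    cutoff = np.quantile(attention, 1.0 - top_frac)
    mask = attention >= cutoff  # True = pixels to remove

    # Inpaint the masked region to obtain the counterfactual image.
    counterfactual = inpainter(image, mask)

    # Compare answers on the original and counterfactual inputs.
    original_answer = vqa_model(image, question)
    counterfactual_answer = vqa_model(counterfactual, question)
    return counterfactual, original_answer != counterfactual_answer
```

Users are then shown the original and counterfactual IQ pairs together with the answer-change outcome, which is the explanatory signal the study evaluates.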

Highlights

  • With the growing application of AI in high-risk domains, it is important for human users to understand the extent and limits of AI system competencies to ensure efficient and safe deployment of such systems

  • Counterfactual examples help more than showing random examples: to measure how much the mental model gains from the additional information, we compare against a condition where users are shown two random examples

  • We demonstrated that showing counterfactual images improves users' mental model, as measured by their ability to predict a visual question answering (VQA) model's performance

Summary

Introduction

With the growing application of AI in high-risk domains, it is important for human users to understand the extent and limits of AI system competencies to ensure efficient and safe deployment of such systems. We need effective approaches to improve end users' mental model of deep neural network-based AI systems. We examine the effect of exposing users to explanatory examples where the inputs are changed in a controlled manner, so that they can better observe how the machine output changes in response to controlled changes in the input. We call these controlled changes in the input "counterfactuals". We hypothesize that such controlled changes in the examples shown are better for mental model improvement than showing random examples.
