Abstract

Explainable Artificial Intelligence promises to help humans better understand the rationale behind a particular machine learning model. In the image domain, such information is frequently conveyed through heat maps. Along the same lines, information regarding defect detection with unsupervised methods applied to images can be conveyed through anomaly maps. Nevertheless, heat maps and anomaly maps can convey inaccurate information (artifacts), and their perception may differ across individuals. The user experience could therefore be enhanced by collecting human feedback and building predictive models of how these maps should be recolored, bridging the gap between the maps produced by explainability techniques and the output humans expect. We envision this work as relevant in at least two scenarios. First, enhancing anomaly and heat maps that convey information about machine vision models deployed in production, by removing information that the user deems unnecessary but that the explainability technique systematically produces due to underlying model issues (artifacts). Second, adapting anomaly and heat maps to users' perceptual needs and preferences.
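
A minimal sketch of the recoloring idea described above, assuming paired examples of original anomaly/heat maps and human-recolored targets. The architecture, loss, and placeholder data are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a tiny convolutional model that learns to map original
# anomaly/heat maps to human-recolored targets. All names, the architecture,
# and the random placeholder data are assumptions for illustration.
import torch
import torch.nn as nn

class RecoloringNet(nn.Module):
    """Small encoder-free CNN predicting a corrected (recolored) map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # map values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, original_maps, human_maps):
    """One optimization step towards the human-provided recolored maps."""
    optimizer.zero_grad()
    pred = model(original_maps)
    loss = nn.functional.mse_loss(pred, human_maps)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Placeholder tensors standing in for (original map, human-recolored map) pairs.
    originals = torch.rand(8, 1, 64, 64)
    targets = torch.rand(8, 1, 64, 64)
    model = RecoloringNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(5):
        loss = train_step(model, opt, originals, targets)
        print(f"epoch {epoch}: mse={loss:.4f}")
```

In practice, the human feedback pairs would come from annotators marking which regions of the original maps they consider artifacts or irrelevant, rather than from random tensors as above.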
