Abstract

Machine learning models are now ubiquitous, but few of them are transparent about how they work. To remedy this, local explanations aim to show users how and why a learned model produces a certain output for a given input (data sample). However, most existing approaches are oriented toward image or text data and thus cannot leverage the structure and properties of tabular data. Therefore, we demonstrate Quest, a new framework for generating explanations that are a better fit for tabular data. The main idea is to create explanations in the form of relational predicates (called queries hereafter) that approximate the behavior of a classifier around the given sample. For this demo, we apply Quest to several synthetic and real-world tabular data sets and pair it with a user interface intended to support data scientists developing classification models.
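
The abstract does not specify how Quest constructs its queries, so the following is only a minimal, hypothetical sketch of what a query-style local explanation for a tabular classifier could look like: the classifier is probed on perturbed neighbors of the sample, and a shallow surrogate tree is read off as a conjunction of relational predicates. The function name `explain_with_query`, the perturbation scheme, and the surrogate choice are all assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (not the Quest implementation): approximate a
# classifier's local behavior around one sample with a conjunctive
# relational predicate ("query") over the table's attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier


def explain_with_query(model, x, feature_names, n_samples=2000, scale=0.5, max_depth=2):
    """Fit a shallow decision tree on perturbed neighbors of x and return the
    root-to-leaf path for x as a conjunction of threshold predicates."""
    rng = np.random.default_rng(0)
    # Perturb the sample to probe the classifier's behavior in its vicinity.
    neighbors = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    labels = model.predict(neighbors)
    # A shallow surrogate tree yields human-readable relational predicates.
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(neighbors, labels)
    tree, node, predicates = surrogate.tree_, 0, []
    while tree.children_left[node] != -1:  # walk down until a leaf is reached
        f, t = tree.feature[node], tree.threshold[node]
        if x[f] <= t:
            predicates.append(f"{feature_names[f]} <= {t:.2f}")
            node = tree.children_left[node]
        else:
            predicates.append(f"{feature_names[f]} > {t:.2f}")
            node = tree.children_right[node]
    return " AND ".join(predicates)


if __name__ == "__main__":
    # Toy tabular data with a simple decision boundary.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(explain_with_query(clf, X[0], ["attr_a", "attr_b", "attr_c"]))
```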
