Abstract

With the tremendous rise in deep learning adoption come questions about the trustworthiness of the deep neural networks that power a variety of applications. In this work, we introduce the concept of a trust matrix, a novel trust quantification strategy that leverages the recently introduced question-answer trust metric by Wong et al. to provide deeper, more detailed insights into where trust breaks down for a given deep neural network on a given set of questions. More specifically, a trust matrix defines the expected question-answer trust for each actor-oracle answer scenario, allowing one to quickly spot areas of low trust that need to be addressed in order to improve the trustworthiness of a deep neural network. We further extend the concept of trust densities with the notion of conditional trust densities.
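The idea described above can be sketched as follows: each entry of the trust matrix is the mean question-answer trust over the samples where the oracle (ground-truth) answer is class i and the actor (model) answer is class j. This is a minimal illustrative sketch, not the authors' implementation; in particular, `question_answer_trust` here is a hypothetical stand-in for the Wong et al. metric, assumed to reward confidence on correct answers and penalize confidence on incorrect ones.

```python
import numpy as np

def question_answer_trust(conf, correct, alpha=1.0, beta=1.0):
    # Hypothetical stand-in for the Wong et al. question-answer trust
    # metric (assumption): reward confidence on correct answers,
    # penalize confidence on incorrect answers.
    return conf ** alpha if correct else (1.0 - conf) ** beta

def trust_matrix(oracle, actor, confidences, n_classes):
    # M[i, j] = expected question-answer trust over samples where the
    # oracle answer is class i and the actor answered class j.
    M = np.zeros((n_classes, n_classes))
    counts = np.zeros((n_classes, n_classes))
    for y, y_hat, c in zip(oracle, actor, confidences):
        M[y, y_hat] += question_answer_trust(c, y == y_hat)
        counts[y, y_hat] += 1
    # Average per cell; cells with no samples stay at zero.
    return np.where(counts > 0, M / np.maximum(counts, 1), 0.0)

# Toy usage: three samples, two classes.
oracle = [0, 0, 1]          # ground-truth answers
actor = [0, 1, 1]           # model's answers
confidences = [0.9, 0.8, 0.7]
M = trust_matrix(oracle, actor, confidences, n_classes=2)
```

Low values in off-diagonal cells (actor disagrees with oracle) or on-diagonal cells with weak confidence point to the actor-oracle scenarios where trust breaks down.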
