Abstract

Explanation abilities are required for data-driven models, whose high number of parameters may render their internal reasoning opaque to users. Despite the natural transparency brought by the graphical structure of models such as Bayesian networks, decision trees, or valuation networks, additional explanation abilities are still required, due to both the complexity of the problem and the consequences of the decision to be taken. Threat assessment is an example of such a complex problem, in which several sources with partially unknown behaviour provide information on distinct but related frames of discernment. In this paper, we propose a solution in the form of an evidential network with explanation abilities to detect and investigate threats to maritime infrastructure. We propose a post-hoc explanation approach to a threat assessment model that is already transparent by design, combining feature relevance and natural language explanations with visual support. To this end, we extend the sensitivity analysis method for generating explanations for evidential reasoning to a multi-source model in which sources can exhibit multiple, disparate behaviours. Natural language explanations are generated on the basis of a series of sensitivity measures quantifying the impact of both direct reports and source models. We conclude with challenges to be addressed in future work.
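To make the kind of sensitivity measure mentioned above concrete, the following is a minimal, illustrative sketch, assuming Dempster–Shafer mass functions fused with Dempster's rule and a simple leave-one-out change in belief as the sensitivity measure. The frame, the report values, and the function names (combine, belief, report_sensitivity) are hypothetical and do not reproduce the paper's actual evidential network or its models of source behaviour.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: conjunctive combination with normalisation.
    m1, m2: dicts mapping frozenset focal elements to mass values."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are fully contradictory")
    return {f: w / (1.0 - conflict) for f, w in combined.items()}

def belief(m, hypothesis):
    """Bel(H): total mass committed to subsets of the hypothesis."""
    return sum(w for f, w in m.items() if f <= hypothesis)

def report_sensitivity(reports, hypothesis):
    """Leave-one-out sensitivity: change in Bel(H) when each report is
    removed from the fusion (one possible sensitivity measure)."""
    def fuse(ms):
        out = ms[0]
        for m in ms[1:]:
            out = combine(out, m)
        return out
    baseline = belief(fuse(reports), hypothesis)
    return [baseline - belief(fuse(reports[:i] + reports[i + 1:]), hypothesis)
            for i in range(len(reports))]

# Hypothetical two-element frame of discernment {threat, no_threat}.
THREAT = frozenset({"threat"})
FRAME = frozenset({"threat", "no_threat"})
r1 = {THREAT: 0.7, FRAME: 0.3}                    # strong supporting report
r2 = {THREAT: 0.2, FRAME: 0.8}                    # weak, heavily discounted report
r3 = {frozenset({"no_threat"}): 0.4, FRAME: 0.6}  # mildly conflicting report
print(report_sensitivity([r1, r2, r3], THREAT))
```

In this sketch, a large positive sensitivity for a report means that removing it would substantially lower the fused belief in the threat hypothesis; such a quantity is the kind of input a natural language explanation could verbalise, for instance that one report is the main driver of the assessment.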
