Abstract

Abstract argumentation constitutes both a major research strand and a key approach that provides the core reasoning engine for a multitude of formalisms in computational argumentation in AI. Reasoning in abstract argumentation is carried out by viewing arguments and their relationships as abstract entities, with argumentation frameworks (AFs) being the most commonly used abstract formalism. Argumentation semantics then drive the reasoning by specifying formal criteria for which sets of arguments, called extensions, can be deemed jointly acceptable. Such extensions provide a basic way of explaining argumentative acceptance. Inspired by recent research, we propose and study a more general class of explanations, so-called strong explanations, for explaining argumentative acceptance in AFs. A strong explanation is a set of arguments such that a target set of arguments is acceptable in each subframework containing the explaining set. We formally show that strong explanations form a larger class than extensions, in particular allowing for smaller explanations. Moreover, assuming basic properties, we show that any explanation strategy, broadly construed, is a strong explanation. We show that the increased variety of strong explanations comes with a computational trade-off: we provide an in-depth analysis of the associated complexity, showing a jump in the polynomial hierarchy compared to extensions.
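
To make the notion concrete, the following is a minimal Python sketch of a brute-force check for strong explanations on small AFs. It assumes one particular acceptance notion, namely credulous acceptance with respect to admissible sets, and one particular reading of subframeworks, namely those induced by argument subsets containing both the explaining set and the target set; the paper's actual definitions may differ on these choices, and all function names are illustrative.

```python
from itertools import combinations

def conflict_free(S, attacks):
    # No attack between any two members of S (including self-attacks).
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, args, attacks):
    # S defends a if every attacker of a is counter-attacked by some member of S.
    return all(any((c, b) in attacks for c in S)
               for b in args if (b, a) in attacks)

def admissible(S, args, attacks):
    return conflict_free(S, attacks) and all(defends(S, a, args, attacks) for a in S)

def accepted(T, args, attacks):
    # Assumed acceptance notion: T is contained in some admissible set of the framework.
    for r in range(len(args) + 1):
        for S in combinations(sorted(args), r):
            if set(T) <= set(S) and admissible(set(S), args, attacks):
                return True
    return False

def is_strong_explanation(E, T, args, attacks):
    # E strongly explains T if T is accepted in every subframework induced by an
    # argument subset containing E and T (one reading of the definition above).
    rest = sorted(set(args) - set(E) - set(T))
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            sub_args = set(E) | set(T) | set(extra)
            sub_attacks = {(a, b) for (a, b) in attacks
                           if a in sub_args and b in sub_args}
            if not accepted(T, sub_args, sub_attacks):
                return False
    return True

# Illustrative AF: a attacks b, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(is_strong_explanation({"a"}, {"c"}, args, attacks))   # True under these assumptions
print(is_strong_explanation(set(), {"c"}, args, attacks))   # False: subframework {b, c} rejects c
```

In the three-argument chain a attacks b, b attacks c, the set {a} strongly explains the acceptance of {c} under these assumptions, while the empty set does not, since the subframework restricted to {b, c} leaves c undefended.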
