Abstract

Argumentative agents in AI are inspired by how humans reason through the exchange of arguments. Given the same set of arguments, possibly attacking one another (Dung's abstract argumentation (AA) framework), these agents are bound to accept the same subset of those arguments (an extension) unless they reason under different argumentation semantics. Humans, however, may not be so predictable, and in this paper we assume that this is because any real agent's reasoning is inevitably influenced by her own preferences over the arguments. Though such preferences are usually unobservable, their effects on the agent's reasoning cannot be washed out. Hence, by reconstructing her reasoning process, we may uncover her hidden preferences, which in turn allow us to predict what else the agent must accept. Concretely, we formalize and develop algorithms for problems such as uncovering the hidden argument preference relation of an agent from her expressed opinion, by which we mean a subset of arguments or attacks she accepts in a given AA framework, and uncovering the collective preferences of a group from a dataset of individual opinions. A major challenge we address in this endeavor is dealing with "answer sets" of argument preference relations, which are generally exponential in size or even infinite. We therefore start by developing a compact representation for such answer sets, called preference states. Preference revelation tasks are then structured as derivations of preference states from data, and reasoning prediction tasks are reduced to manipulations of derived preference states, without enumerating the underlying (possibly infinite) answer sets. We also apply these results to two non-trivial problems: learning preferences over rules in structured argumentation with priorities, an open problem to date; and analyzing public polls in apparently deeper ways than existing social argumentation frameworks allow.
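To make the setting concrete, here is a minimal brute-force sketch, not the paper's algorithm and with all identifiers illustrative. It uses a common preference-based reading of AA in which an attack (a, b) succeeds only if the target b is not strictly preferred to the attacker a, and it enumerates the strict preference relations under which an observed opinion is a stable extension. The exponential enumeration it performs is exactly what the paper's preference states are designed to avoid.

```python
# A minimal sketch (not the paper's algorithm) of preference revelation:
# find the preference relations under which an observed opinion is a
# stable extension of a preference-based AA framework.  The framework,
# opinion, and helper names below are illustrative assumptions.
from itertools import product

def successful_attacks(attacks, pref):
    """Keep an attack (a, b) unless b is strictly preferred to a."""
    return {(a, b) for (a, b) in attacks if (b, a) not in pref}

def is_stable(args_, attacks, ext):
    """ext is stable: conflict-free and attacking every outside argument."""
    if any((a, b) in attacks for a in ext for b in ext):
        return False
    return all(any((a, b) in attacks for a in ext) for b in args_ - ext)

def consistent_preferences(args_, attacks, opinion):
    """Brute-force enumeration of strict preference relations (sets of
    pairs (a, b) meaning a > b) that make the opinion a stable extension."""
    pairs = [(a, b) for a in args_ for b in args_ if a != b]
    for bits in product([False, True], repeat=len(pairs)):
        pref = {p for p, keep in zip(pairs, bits) if keep}
        # asymmetry: never both a > b and b > a
        if any((b, a) in pref for (a, b) in pref):
            continue
        if is_stable(args_, successful_attacks(attacks, pref), opinion):
            yield pref

# Toy AA framework: a and b attack each other; the agent accepted {b}.
args_ = {"a", "b"}
attacks = {("a", "b"), ("b", "a")}
for pref in consistent_preferences(args_, attacks, {"b"}):
    print(sorted(pref))
```

On this toy framework the sketch reports two consistent relations, the empty relation and b > a, while a > b is ruled out: an observed opinion constrains, but rarely pins down, the hidden preferences, which is why the paper works with compact representations of the whole answer set rather than single relations.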
