Abstract

In Abstract Argumentation (AA), rational agents presented with the same AA framework accept the same arguments unless they reason under different AA semantics. Real agents often diverge even so, and in this paper we attribute this divergence to their different preferences over the confronted arguments. By reconstructing their reasoning processes we can therefore learn their hidden preferences, which in turn allow us to predict what else they must accept. Concretely, we formalize, and develop algorithms for, problems such as learning the hidden preference relation of an agent from his expressed opinion, by which we mean a subset of the arguments or attacks he accepts; and learning the collective preferences of a group from a dataset of individual opinions. A major challenge addressed in this endeavor is representing and reasoning with “answer sets” of preference relations, which are generally exponential in number or even infinite.
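
To make the first problem concrete: under the common preference-based reading of AA (in the style of preference-based argumentation frameworks), an attack from x on y succeeds unless y is strictly preferred to x, so an agent's expressed opinion constrains which preference relations could have produced it. The sketch below is a minimal brute-force illustration of that idea under these assumptions, not the paper's algorithm; the toy framework, the opinion, and all names in it are hypothetical. It enumerates strict partial orders over three arguments and keeps those whose induced grounded extension contains the expressed opinion.

    from itertools import combinations

    # Hypothetical toy input (not from the paper): a 3-argument framework
    # where a attacks b and b attacks c, and an agent who says he accepts b.
    ARGS = {"a", "b", "c"}
    ATTACKS = {("a", "b"), ("b", "c")}
    OPINION = {"b"}

    def is_strict_order(pref):
        """A candidate preference relation is a strict partial order:
        a set of (better, worse) pairs that is asymmetric and transitive."""
        if any((y, x) in pref for (x, y) in pref):
            return False
        return all((x, z) in pref
                   for (x, y) in pref for (u, z) in pref if u == y)

    def defeats(pref):
        """Preference-based reading: attack (x, y) succeeds
        unless the target y is strictly preferred to the attacker x."""
        return {(x, y) for (x, y) in ATTACKS if (y, x) not in pref}

    def grounded(dft):
        """Grounded extension of the defeat graph:
        least fixpoint of the characteristic (defense) function."""
        ext = set()
        while True:
            defended = {
                a for a in ARGS
                if all(any((d, x) in dft for d in ext)   # every defeater of a
                       for (x, y) in dft if y == a)      # is counter-defeated
            }
            if defended == ext:
                return ext
            ext = defended

    # Enumerate every strict partial order over ARGS and keep those
    # under which the expressed opinion is part of what gets accepted.
    pairs = [(x, y) for x in ARGS for y in ARGS if x != y]
    consistent = [
        set(cand)
        for r in range(len(pairs) + 1)
        for cand in combinations(pairs, r)
        if is_strict_order(set(cand)) and OPINION <= grounded(defeats(set(cand)))
    ]

    print(len(consistent), "consistent preference relations; a minimal one:")
    print(min(consistent, key=len))

On this toy input six preference relations survive, the smallest being {("b", "a")}: an agent who accepts b despite the attack from a must prefer b to a. The exhaustive enumeration also makes the abstract's closing point tangible: the answer set of consistent preferences grows exponentially with the number of arguments, which is why it calls for a compact representation.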
