Abstract

My research seeks insight into the complexity of computational reasoning under uncertain information. I focus on preference aggregation and social choice. Insights in these areas have broader impacts in complexity theory, autonomous agents, and uncertainty in artificial intelligence.

Motivation: Planning and reasoning in nondeterministic settings is something people take for granted every day. We do not know for certain whether each small action we choose will succeed or fail, whether the actions we choose will lead us to catastrophic consequences or land us safely on the other side of the street. Reasoning in a domain where actions are not guaranteed to succeed is something humans do fairly well and machines do not. The field of social choice offers a rich set of domains and problems within which we can work. A central question of social choice is: how do we aggregate a (possibly) contradictory set of individual preferences and/or observations into an appropriate global decision? We focus on the manipulation of social choice functions when the individual agents’ preferences are represented as probability distributions rather than a set of deterministic preferences. This notion of uncertainty has been introduced hesitantly, if at all, in the existing literature. We wish to fill this gap.

Background: The study of manipulation in preference aggregation stems from social choice theory. Building on the work of Arrow [1], the Gibbard–Satterthwaite Theorem shows that any aggregation system meeting a set of simple fairness conditions can be manipulated by non-truthful voting [7, 12]. The Duggan–Schwartz Theorem extends this result to an even larger class of aggregation methods [4]. These results tell us that we cannot devise a “good” preference aggregation scheme that is immune to manipulation, which implies that groups can never come to provably fair, non-manipulated agreements. However, in the early 1990s, Bartholdi et al. proposed protecting aggregation schemes through computational complexity [2]. The idea, much as in cryptography, is that if a manipulation is difficult to compute, then manipulation is unlikely to occur. The computational social choice (ComSoc) community seeks to classify aggregation systems in terms of their susceptibility to manipulation. There is a rich literature on the computational complexity of elections [6], and on the worst-case complexity of manipulation.
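The abstract does not specify the probabilistic preference model, so the following is only a minimal sketch: it assumes each voter's preferences are a probability distribution over complete rankings (a deterministic voter being the point-distribution special case) and aggregates with the Borda scoring rule on expected scores. The candidate names, the example profile, and the choice of Borda are illustrative assumptions, not the method of the paper.

    # Minimal sketch (illustrative only): deterministic vs. probabilistic voters,
    # aggregated by expected Borda score. Not the paper's model or rule.

    CANDIDATES = ["a", "b", "c"]

    def borda_scores(ranking):
        """Map each candidate to its Borda score under one complete ranking."""
        m = len(ranking)
        return {cand: m - 1 - pos for pos, cand in enumerate(ranking)}

    def expected_borda_winner(profile):
        """profile: list of voters, each a dict {ranking (tuple): probability}."""
        totals = {c: 0.0 for c in CANDIDATES}
        for voter in profile:
            for ranking, prob in voter.items():
                for cand, score in borda_scores(ranking).items():
                    totals[cand] += prob * score
        return max(totals, key=totals.get), totals

    # A deterministic voter is the special case of a point distribution.
    deterministic_voter = {("a", "b", "c"): 1.0}

    # An uncertain voter: we believe only with probability 0.6 that b is ranked first.
    uncertain_voter = {("b", "c", "a"): 0.6, ("c", "b", "a"): 0.4}

    winner, totals = expected_borda_winner([deterministic_voter, uncertain_voter])
    print(winner, totals)  # b, {'a': 2.0, 'b': 2.6, 'c': 1.4}

A manipulation question in this setting would then ask whether some voter can misreport a distribution (or ranking) that changes the winner in her favor, and how hard that misreport is to compute; the sketch only fixes one concrete reading of "preferences as probability distributions."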
