Abstract

Game theory is a useful tool for reasoning about interactions between agents and, in turn, for aiding those agents' decisions. Stackelberg games in particular are natural models for many important applications, such as oligopolistic markets and security domains. Indeed, Stackelberg games are at the heart of three deployed systems, ARMOR, IRIS, and GUARDS, that aid security officials in making critical resource allocation decisions. In a Stackelberg game, one player, the leader, commits to a strategy, and the follower makes her decision with knowledge of the leader's commitment. Existing algorithms for Stackelberg games efficiently find optimal solutions (leader strategies); however, they critically assume that the follower plays optimally. Unfortunately, in many applications, agents face human followers (adversaries) who, because of their bounded rationality and possibly limited information about the leader's strategy, may deviate from their expected optimal response. Failing to account for these likely deviations when dealing with human adversaries may cause an unacceptable degradation in the leader's reward, particularly in security applications where these algorithms have been deployed. To that end, I explore robust algorithms for agent interactions with human adversaries in security applications. I have developed a number of robust algorithms for a class of games known as Security Games and am working toward enhancing these approaches for a richer model of these games that I developed, known as Security Circumvention Games.
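The setting can be made concrete with a small example. The Python sketch below commits the leader to a mixed strategy, computes a rational follower's best response, and then shows how the leader's reward degrades if a boundedly rational follower deviates from that response. The payoff matrices, the coarse grid search, and the deviation model are illustrative assumptions of mine, not taken from the thesis; in general the optimal commitment is found with an optimization formulation rather than a grid.

```python
import numpy as np

# Illustrative 2x2 game: rows = leader pure strategies, cols = follower
# pure strategies. These payoffs are hypothetical, not from the paper.
L = np.array([[2.0, -1.0],
              [-1.0, 1.0]])   # leader payoffs
F = np.array([[1.0, 0.0],
              [0.0, 2.0]])    # follower payoffs

def follower_best_response(x):
    """Rational follower: best pure response to leader mixed strategy x."""
    return int(np.argmax(x @ F))  # x @ F = follower's expected payoff per action

def leader_payoff(x, j):
    """Leader's expected payoff when committing to x and follower plays j."""
    return float(x @ L[:, j])

# Leader's optimal commitment against a perfectly rational follower,
# found by a coarse grid over mixed strategies (an LP in general).
best_x, best_val = None, -np.inf
for p in np.linspace(0.0, 1.0, 101):
    x = np.array([p, 1.0 - p])
    v = leader_payoff(x, follower_best_response(x))
    if v > best_val:
        best_x, best_val = x, v

j_star = follower_best_response(best_x)
print("optimal commitment:", best_x, "-> reward vs rational follower:", best_val)

# A boundedly rational follower may deviate to the other action; under the
# same commitment, the leader's reward can drop sharply.
j_dev = 1 - j_star
print("same commitment vs deviating follower:", leader_payoff(best_x, j_dev))
```

With these payoffs the optimal commitment earns the leader 2 against a rational follower but -1 against a follower who deviates, which is the kind of degradation the robust algorithms above are designed to guard against.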
