Carefully perturbed inputs, known as adversarial examples, degrade the performance of traditional machine learning (ML) models. Adversarial machine learning (AML), which takes adversaries into account during training and learning, has emerged as a valid technique for defending against such attacks. Because adversaries' attack strategies are complex and uncertain, researchers use game theory to study the interactions between an adversary and an ML system designer. By configuring different game rules and analyzing the outcomes of an adversarial game, one can effectively predict attack strategies and derive optimal defense strategies for the system designer. However, the literature still lacks a holistic review of adversarial games in AML. In this paper, we extend the scope of previous surveys and provide a thorough overview of existing game-theoretic approaches in AML for adaptively defending against adversarial attacks. To evaluate these approaches, we propose a set of metrics and use them to discuss the approaches' merits and drawbacks. Finally, based on our literature review and analysis, we raise several open problems and suggest research directions that merit further investigation.