Baumeister, DeWall, and Vohs' (2009) response to our article is as much a criticism of meta-analysis in general as it is of our particular meta-analysis, and it suggests some misunderstandings of the meta-analytic process and purpose. First, they suggested that nonpublished studies should not be included in meta-analyses because such studies lack methodological rigor. The unpublished studies we included largely came from students and laboratories of published authors and used the same paradigms as published experiments, thus sharing the same pedigree as the published research. Furthermore, including unpublished studies is a basic meta-analytic necessity, because many unpublished studies have nonsignificant effects. The ''file drawer problem'' (Rosenthal, 1979) refers to the existence of many studies that remain unpublished because the null hypothesis was not rejected; failing to take these studies into account when evaluating a hypothesis can lead to serious mistakes. Second, Baumeister et al. argued that many studies showing a lack of mood effects following rejection were not included in our meta-analysis because journal editors insist on the deletion of statistics relating to nonsignificant results. If an effect size cannot be calculated, the result cannot be included in a meta-analysis. Although we agree in general with this comment, we were aware of the issue and did not use the low-effort strategy of which we are accused. Studies were excluded only after we were unable to obtain complete results from the researchers involved. In addition, the unpublished studies we included tended to report the means and standard deviations of nonsignificant effects. Nor was there a systematic bias toward excluding only nonsignificant results: some studies with significant effects were also excluded when they could not be translated into the requisite effect sizes (e.g., regression analyses).
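The file-drawer logic can be made concrete with Rosenthal's (1979) fail-safe N, which estimates how many unretrieved null-result studies would have to exist to reduce a combined result to nonsignificance. The sketch below is illustrative only; the z scores are hypothetical and are not drawn from our meta-analysis.

```python
def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's (1979) fail-safe N: the number of unretrieved
    studies averaging z = 0 that would drop the Stouffer-combined
    result below the one-tailed alpha = .05 criterion (z = 1.645)."""
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (z_crit ** 2) - k

# Hypothetical z scores from k = 5 studies, including weak effects
z = [2.1, 1.8, 2.5, 0.4, 1.2]
print(round(failsafe_n(z), 1))  # about 18.7 file-drawer studies
```

If the tolerance level (often 5k + 10) exceeds this figure, the pooled effect could plausibly be an artifact of publication bias, which is why unpublished studies belong in the synthesis.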
If we could add any qualification to our results, it would be that the nonsignificant results we could not include often came from the life-alone paradigm. If the Baumeister group were to conduct their own meta-analysis using statistics we did not possess, they might well find that the life-alone paradigm did not produce emotional responses but that other paradigms did. Defining the boundaries of rejection is difficult. Our meta-analysis found very little evidence for differences between paradigms, although distinctions, as we have just noted, may emerge in the future. We do, however, find it disturbing that Baumeister et al. are so insistent on the distinction between imagined and experienced affect, because the life-alone paradigm asks participants to imagine a future of rejection; they do not actually experience rejection. In contrast, in vivo inductions of rejection (e.g., groupwork, Cyberball) produce direct experiences of rejection and do find mood effects. Furthermore, laboratory studies may underestimate the impact of rejection. The sole study to track real-life rejection experiences found that anger was significantly affected by rejection, although further research is needed because no other mood measures were collected (Nezlek, Williams, & Wheeler, 2008). We also stand by our coding of various measures into control. A degree of flexibility has always been necessary when coding needs, starting from Murray's (1938) original explorations; it is therefore invalid to suggest that our coding must be restricted by previous researchers. Our coding had reasonable interrater reliability, and both raters started with only the American Psychological Association's definition of control. There is a large literature linking control to antisocial responses (e.g., Geen, 1978) and emerging evidence that control-aggression schemas mediate the rejection-control-aggression relationship (Warburton, McIlwain, Cairns, & Taylor, 2006).
That our coding led to such consistent results (i.e., cold pressor tasks show effect sizes similar to self-report items such as ''I did not feel in control'') was surprising to us but also suggestive of a new way of conceiving the rejected state. We agree that further research is needed. We note that Baumeister et al. misrepresent the position of Williams (2001). Williams argued that control is integral to the experience of ostracism because ostracism is unilateral: the ostracizer deprives the victim of a means of response. To the extent that other rejection paradigms are also unilateral, they may also affect control. The groupwork rejection paradigm and

The only research group not represented by unpublished results is the Baumeister group, despite personal requests for such studies.

This study (Nezlek, Williams, & Wheeler, 2008) was not included in the meta-analysis, as it reported only regression coefficients, a notoriously difficult data type for meta-analyses.

Address correspondence to Jonathan Gerber, Department of Psychology, Macquarie University, Sydney, New South Wales, 2109 Australia; e-mail: jgerber@psy.mq.edu.au.

PERSPECTIVES ON PSYCHOLOGICAL SCIENCE